Hello. I'm setting up a computer that will run 'CentOS 6 server'. The MB is an Asus with a hw raid controller (Promise PDC-20276), which I want to use in RAID-1 mode. I noted (from a MB website) that it also needs a driver - which is probably why it's called a 'fakeraid'.
So, I've been trying to determine if any recent kernels support this chip. Using google.com/linux, I found lots of hits dating about 2002 - 04, and referencing the 2.4 kernel (which had the driver compiled into it). But, nothing newer.
I checked kernel.org and kernelnewbies.org - I see that raid-1 is supported. But, I can't find any reference to this chip.
How can I find out what drivers are compiled into a given kernel? Or, basically what hardware a given kernel supports?
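(For reference, a rough way to check this on a running system; the module name and the availability of "lspci -k" on older distros are assumptions, so adjust to whatever your kernel actually ships:)

  # The kernel's build config lists built-in (=y) and modular (=m) drivers:
  grep -i 'pdc' /boot/config-$(uname -r)
  # On newer systems, lspci can show which driver/module has claimed each PCI device:
  lspci -k | grep -i -A 3 promise
  # modinfo shows what hardware IDs a candidate module claims (pata_pdc2027x is
  # assumed here to be the libata Promise driver; substitute whatever lspci reports):
  modinfo pata_pdc2027x | grep -i -E 'alias|description'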
Michael Klinosky wrote on Sat, 29 Jan 2011 22:33:50 -0500:
I'm setting up a computer that will run 'CentOS 6 server'.
Sure about that? This is your first experience with CentOS/RHEL?
But, I can't find any reference to this chip.
I can't tell you either. I think the support for this is coming from dmraid; a search for that should reveal more for you. Such drivers are usually not compiled into the kernel at all (maybe they were in the 2.4 days); there are various methods to use them as a kernel module. In general, people recommend using normal software RAID if you have only a fakeraid controller. And, a second "in general": if you want to find something out about CentOS, looking at kernel.org or for "recent kernels" won't help much.
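(If you do want to see whether the fakeraid metadata is even usable, a quick check with dmraid, run as root, would look something like this; treat it as a sketch, not a recommendation:)

  dmraid -r     # list block devices carrying vendor RAID (fakeraid) metadata
  dmraid -s     # show any RAID sets dmraid could assemble from that metadata
  # dmraid -ay  # would activate those sets as /dev/mapper/<setname> devices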
I suggest you first read a bit about CentOS on www.centos.org before you make your OS choice. Don't get me wrong, I don't want to discourage you, but you should *know* what you get, you shouldn't assume.
Kai
Kai Schaetzl wrote:
I'm setting up a computer that will run 'CentOS 6 server'.
Sure about that? This is your first experience with CentOS/RHEL?
It'll run zoneminder (to create a dvr for video surveillance). I've been using Cent5.3 (non-server) since it was released, and I used Fedora before that (since FC 5 days). Actually, I might go back to Fedora for my regular machine.
I decided on CentOS for the server for a couple of reasons:
* I'm sticking with RH-style distros (I even got my 80-year-old, non-computer-savvy Mother to be comfortable with it!)
* zoneminder supports CentOS (they have a guide to set it up on Cent).
Robert wrote:
You are generally *better off* to *disable* the motherboard RAID controller and use native Linux software RAID.
After my research, I'm realizing that linux doesn't quite support it. So, I'll probably do as you suggested.
Rudi wrote:
CentOS 6 hasn't been released yet.
Well, I meant when it gets released. :)
2011/1/30 Michael Klinosky mpk2@enter.net:
Robert wrote:
You are generally *better off* to *disable* the motherboard RAID controller and use native Linux software RAID.
After my research, I'm realizing that linux doesn't quite support it. So, I'll probably do as you suggested.
I don't know if "linux doesn't quite support it" is true, but nevertheless, even if Linux/CentOS had PERFECT support for it, you still shouldn't use it IMHO.
The whole point of RAID is to give some sort of protection against hardware (HDD) failures. Fakeraid is a proprietary software RAID solution, so if your motherboard suddenly decides to die, how will you then get access to your data? You'll need another motherboard/system with a fakeraid-compatible controller, but how will you know if the new fakeraid-based controller is compatible with the HDDs created with the old controller? How will you know if the RAID controller has the correct firmware? Your best bet is to buy exactly the same motherboard (if it's still available at that time) and put the same BIOS version on it as your old board had.
Using Linux software RAID, you'll get the same performance as fakeraid, and you can plug your HDDs into any motherboard running Linux to access your data. Linux's own implementation of software RAID was introduced in kernel 2.1 (around 1997), so you can be fairly sure that the solution is well tested - something which is most likely not the case with a fakeraid controller with limited/partly missing Linux support.
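(As a rough illustration of how little is involved - device and partition names below are assumptions, run as root:)

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1
  cat /proc/mdstat                           # watch the initial mirror sync
  mdadm --detail --scan >> /etc/mdadm.conf   # record the array so it assembles at boot
  # Moved to any other Linux box, the same mirror can be picked up with:
  # mdadm --assemble --scan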
The only valid reason to run fakeraid I can think of, is if you're going to run Windows on it.
Best regards Kenni
At Sat, 29 Jan 2011 22:33:50 -0500 CentOS mailing list centos@centos.org wrote:
Hello. I'm setting up a computer that will run 'CentOS 6 server'. The MB is an Asus with a hw raid controller (Promise PDC-20276), which I want to use in RAID-1 mode. I noted (from a MB website) that it also needs a driver
- which is probably why it's called a 'fakeraid'.
So, I've been trying to determine if any recent kernels support this chip. Using google.com/linux, I found lots of hits dating about 2002 - 04, and referencing the 2.4 kernel (which had the driver compiled into it). But, nothing newer.
I checked kernel.org and kernelnewbies.org - I see that raid-1 is supported. But, I can't find any reference to this chip.
How can I find out what drivers are compiled into a given kernel? Or, basically what hardware a given kernel supports?
Many of the SATA (so-called) hardware RAID controllers are not really hardware RAID controllers; they are 'fakeraid' and require lots of software RAID logic. You are generally *better off* to *disable* the motherboard RAID controller and use native Linux software RAID.
On Jan 30, 2011, at 7:36 AM, Robert Heller wrote:
At Sat, 29 Jan 2011 22:33:50 -0500 CentOS mailing list centos@centos.org wrote:
Many of the SATA (so-called) hardware RAID controllers are not really hardware RAID controllers; they are 'fakeraid' and require lots of software RAID logic. You are generally *better off* to *disable* the motherboard RAID controller and use native Linux software RAID.
The only caveat I can think of is if you wanted to BOOT off of the raid configuration. The BIOS wouldn't understand the Linux RAID implementation.
But for RAID 1, especially, you probably want a minimum of 3 drives. A boot drive with Linux, and the other 2 RAIDed together for speed. That way, the logic to handle the failure of one of the drives isn't on the drive that may have failed.
Of course, if it is the Linux drive that failed, you replace that (from backup?) and your data should all still be available.
On Sun, Jan 30, 2011 at 5:01 PM, Kevin K kevink1@fidnet.com wrote:
On Jan 30, 2011, at 7:36 AM, Robert Heller wrote:
At Sat, 29 Jan 2011 22:33:50 -0500 CentOS mailing list centos@centos.org wrote:
Many of the SATA (so-called) hardware RAID controllers are not really hardware RAID controllers; they are 'fakeraid' and require lots of software RAID logic. You are generally *better off* to *disable* the motherboard RAID controller and use native Linux software RAID.
The only caveat I can think of is if you wanted to BOOT off of the raid configuration. The BIOS wouldn't understand the Linux RAID implementation.
But for RAID 1, especially, you probably want a minimum of 3 drives. A boot drive with Linux, and the other 2 RAIDed together for speed. That way, the logic to handle the failure of one of the drives isn't on the drive that may have failed.
Of course, if it is the Linux drive that failed, you replace that (from backup?) and your data should all still be available.
You can install Linux on software RAID1 :)
At Sun, 30 Jan 2011 09:01:56 -0600 CentOS mailing list centos@centos.org wrote:
On Jan 30, 2011, at 7:36 AM, Robert Heller wrote:
At Sat, 29 Jan 2011 22:33:50 -0500 CentOS mailing list centos@centos.org wrote:
Many of the SATA (so-called) hardware RAID controllers are not really hardware RAID controllers; they are 'fakeraid' and require lots of software RAID logic. You are generally *better off* to *disable* the motherboard RAID controller and use native Linux software RAID.
The only caveat I can think of is if you wanted to BOOT off of the raid configuration. The BIOS wouldn't understand the Linux RAID implementation.
Not really a problem: make /boot its own RAID 1 set. The BIOS will boot off /dev/sda and Grub will read /dev/sda1 (typically) to load the kernel and init ramdisk. The Linux RAID1 superblock is at the *end* of the disk -- the ext2/3 superblock is in its normal place, where grub will see it. /dev/sda1 and /dev/sdb1 will be kept identical by the Linux RAID logic, so if /dev/sda dies, it can be pulled and /dev/sdb will become /dev/sda. You'll want to replicate the boot loader install on /dev/sdb (eg grub-install ... /dev/sdb).
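(A minimal sketch of that last step on a CentOS 5/6-era system with legacy grub, assuming /dev/sda1 and /dev/sdb1 are the two halves of the /boot mirror:)

  grub-install /dev/sdb            # put a boot loader on the second disk as well
  # or, equivalently, from the grub shell:
  # grub> device (hd0) /dev/sdb
  # grub> root (hd0,0)
  # grub> setup (hd0)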
But for RAID 1, especially, you probably want a minimum of 3 drives. A boot drive with Linux, and the other 2 RAIDed together for speed. That way, the logic to handle the failure of one of the drives isn't on the drive that may have failed.
No, only two drives will be just fine. Even if one drive fails, you can still boot the RAID set in 'degraded' mode, and then add in the replacement disk to the running system. Make two partitions on each drive, a small one for /boot and the rest for everything else, and make this second RAID set an LVM volume group and carve out swap, root (/), /home, etc. as LVM volumes (a rough command sketch follows the example output below).
That is what I have:
sauron.deepsoft.com% cat /proc/mdstat
Personalities : [raid1]
md0 : active raid1 sdb1[1] sda1[0]
      1003904 blocks [2/2] [UU]

md1 : active raid1 sdb2[1] sda2[0]
      155284224 blocks [2/2] [UU]

unused devices: <none>

sauron.deepsoft.com% df -h /boot/
Filesystem            Size  Used Avail Use% Mounted on
/dev/md0              965M  171M  746M  19% /boot

sauron.deepsoft.com% sudo /usr/sbin/pvdisplay
  --- Physical volume ---
  PV Name               /dev/md1
  VG Name               sauron
  PV Size               148.09 GB / not usable 768.00 KB
  Allocatable           yes
  PE Size (KByte)       4096
  Total PE              37911
  Free PE               23
  Allocated PE          37888
  PV UUID               ttB15B-3eWx-4ioj-TUvm-lAPM-z9rD-Prumee

sauron.deepsoft.com% df -h / /usr /var /home
Filesystem                 Size  Used Avail Use% Mounted on
/dev/mapper/sauron-c5root  2.0G  905M  1.1G  47% /
/dev/mapper/sauron-c5usr   9.9G  4.9G  4.5G  53% /usr
/dev/mapper/sauron-c5var   4.0G  1.4G  2.5G  36% /var
/dev/mapper/sauron-home    9.9G  8.7G  759M  93% /home
(I have a pile of other File Systems.)
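(A rough sketch of the commands that would produce a layout like the one above - disk names, volume group name, and sizes are all assumptions:)

  mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sda1 /dev/sdb1   # small set, for /boot
  mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sda2 /dev/sdb2   # the rest, for LVM
  pvcreate /dev/md1                # make the big mirror an LVM physical volume
  vgcreate vg0 /dev/md1            # volume group on top of it
  lvcreate -L 2G  -n root vg0      # then carve out logical volumes for
  lvcreate -L 2G  -n swap vg0      # root, swap, /home, etc.
  lvcreate -L 10G -n home vg0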
Of course, if it is the Linux drive that failed, you replace that (from backup?) and your data should all still be available.
Thanks.
I hadn't really looked into any of this for a few years, since I used RAID to combine 2 smaller hard drives into one larger volume. At work, I'm either just a user of a remote server that uses netapp filers for storage, or am running more disposable installs on lower-end systems (with 1 hard drive) that can be wiped and reinstalled easily.
On Sun, Jan 30, 2011 at 5:33 AM, Michael Klinosky mpk2@enter.net wrote:
Hello. I'm setting up a computer that will run 'CentOS 6 server'. The MB is an Asus with a hw raid controller (Promise PDC-20276), which I want to use in RAID-1 mode. I noted (from a MB website) that it also needs a driver
- which is probably why it's called a 'fakeraid'.
So, I've been trying to determine if any recent kernels support this chip. Using google.com/linux, I found lots of hits dating about 2002 - 04, and referencing the 2.4 kernel (which had the driver compiled into it). But, nothing newer.
I checked kernel.org and kernelnewbies.org - I see that raid-1 is supported. But, I can't find any reference to this chip.
How can I find out what drivers are compiled into a given kernel? Or, basically what hardware a given kernel supports?
CentOS 6 hasn't been released yet.
The card that you want to use isn't a real RAID card and uses the PC's CPU for RAID calculations, so you're better off using Linux md software RAID for this purpose.
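(One extra note if you go the md route: if the disks were ever initialized by the Promise BIOS, they may still carry fakeraid metadata that dmraid will try to claim at boot. Something along these lines can clear it, but it is destructive and the exact options should be double-checked against the dmraid man page; device names are assumptions:)

  dmraid -r                  # see whether any vendor RAID metadata is present
  # dmraid -r -E /dev/sda    # erase that metadata from a member disk (destructive)
  # dmraid -r -E /dev/sdb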