I need to install CentOS 5.2 on an IBM xSeries 226 server, which comes with an IBM ServeRAID 6i RAID card. I think it is not a true hardware RAID card. It has, nevertheless, an interesting feature: 128 MB of cache with battery backup.
I launched the CentOS boot DVD and CentOS correctly identified the card and the RAID 5 array as configured by the controller's BIOS.
My question now is: what would be the better way to implement RAID 5 on this server? Should I use the detected array and its driver, or should I delete the array and go for Linux software RAID?
If both solutions are in fact Software RAID, is there any particular reason to prefer one of the methods?
I know that Linux RAID will create a universal, more compatible array, readable on any Linux machine. But is there some other reason that makes it preferable to use the ServeRAID driver provided by CentOS? Is it optimized in any way that recommends its use?
Will the controller still make use of its cache and battery backup if configured as a plain SCSI controller with Linux Software RAID?
I hope that some more experienced list member can enlighten me on this.
Thank you!
PS - The machine is powered by an Intel P4 Xeon processor served by 2.5 GB of RAM. The disks are three IBM 10K rpm Ultra320 SCSI drives of 73 GB each.
On Tue, 18 Nov 2008, miguelmedalha@sapo.pt wrote:
My question now is: what would be the better way to implement RAID 5 on this server? Should I use the detected array and its driver, or should I delete the array and go for Linux software RAID?
I don't know about throughput, but using software RAID has the following pluses (in my book): 1) no need for the vendor-specific agents to monitor it; 2) if/when you get larger drives, you can sub them into the array and then expand it (rough sketch below). I haven't seen a hardware RAID card yet that will allow that.
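For point 2, once all the members have been swapped for bigger drives, growing the array is just a couple of mdadm commands. Something along these lines (an untested sketch; the device and array names are examples):

    # replace each member in turn and let it resync, e.g. for one disk:
    mdadm /dev/md0 --fail /dev/sdb1 --remove /dev/sdb1
    mdadm /dev/md0 --add /dev/sdb1    # the new, larger disk, partitioned first
    # once every member is larger, grow the array into the new space:
    mdadm --grow /dev/md0 --size=max
    # then grow the filesystem, e.g. resize2fs /dev/md0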
----------------------------------------------------------------------
Jim Wildman, CISSP, RHCE    jim@rossberry.com    http://www.rossberry.com
"Society in every state is a blessing, but Government, even in its best
state, is a necessary evil; in its worst state, an intolerable one."
                                                          Thomas Paine
miguelmedalha@sapo.pt wrote:
I know that Linux RAID will create a universal, more compatible array, readable on any Linux machine. But is there some other reason that makes it preferable to use the ServeRAID driver provided by CentOS? Is it optimized in any way that recommends its use?
I always prefer hardware RAID over software RAID, primarily for hot-swap purposes: if a drive is dead I just yank it, replace it, and don't have to worry about it; the rebuild is automatic. This may be the case with Linux software RAID now, I'm not sure. A few years ago hot swap was somewhat iffy and results varied widely with the controller (e.g. it could panic/hang the box in some situations).
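Last I looked, with md you had to drive the replacement by hand, something along these lines (a sketch only; device names are examples):

    mdadm /dev/md0 --fail /dev/sdb1      # mark the dying disk as failed
    mdadm /dev/md0 --remove /dev/sdb1    # pull it out of the array
    # physically swap the disk, partition it to match, then:
    mdadm /dev/md0 --add /dev/sdb1       # the rebuild starts automatically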
Also, root on RAID is much simpler with hardware RAID than with software RAID. If you have a battery-backed cache, as yours appears to, you have the added advantage of write-back caching, which can give higher performance.
Out of the 400 or so RAID cards I've used over the years, I recall only 2, maybe 3 of them having trouble (failing/faulty).
So, in short, I always prefer hardware RAID because it is simpler to operate, for me at least. That is, provided it is a true hardware RAID controller; there are a lot of shit hybrid software/hardware RAID cards out there, and I do not trust them.
nate
miguelmedalha@sapo.pt wrote:
I need to install CentOS 5.2 on an IBM xSeries 226 server, which comes with an IBM ServeRAID 6i RAID card. I think it is not a true hardware RAID card. It has, nevertheless, an interesting feature: 128 MB of cache with battery backup.
I launched the CentOS boot DVD and CentOS correctly identified the card and the RAID 5 array as configured by the controller's BIOS.
My question now is: what would be the better way to implement RAID 5 on this server? Should I use the detected array and its driver, or should I delete the array and go for Linux software RAID?
If both solutions are in fact Software RAID, is there any particular reason to prefer one of the methods?
I know that Linux RAID will create a universal, more compatible array, readable on any Linux machine. But is there some other reason that makes it preferable to use the ServeRAID driver provided by CentOS? Is it optimized in any way that recommends its use?
Will the controller still make use of its cache and battery backup if configured as a plain SCSI controller with Linux Software RAID?
I hope that some more experienced list member can enlighten me on this.
Thank you!
PS - The machine is powered by an Intel P4 Xeon processor served by 2.5 GB of RAM. The disks are three IBM 10K rpm Ultra320 SCSI drives of 73 GB each.
Hi Miguel,
FWIW, I have been running an IBM eServer 346 with six 73 GB disks in two RAID volumes using the ServeRAID drivers for 3 years with no problems. Performance is much better than on my development system, which used single non-RAID disks for various partitions.
ChrisG
miguelmedalha@sapo.pt wrote:
I need to install CentOS 5.2 on an IBM xSeries 226 server, which comes with an IBM ServeRAID 6i RAID card. I think it is not a true hardware RAID card. It has, nevertheless, an interesting feature: 128 MB of cache with battery backup.
It's a real RAID adapter, and the Linux kernel is able to handle it with the standard ips module.
I launched the CentOS boot DVD and CentOS correctly identified the card and the RAID 5 array as configured by the controller's BIOS.
My question now is: what would be the better way to implement RAID 5 on this server? Should I use the detected array and its driver, or should I delete the array and go for Linux software RAID?
It's always up to you to decide, but I'd prefer using the HW RAID controller in that case, of course...
If both solutions are in fact Software RAID, is there any particular reason to prefer one of the methods?
If you use the HW RAID, you can easily manage your RAID array with either the ipssend CLI or the ServeRAID Manager GUI (both are downloadable from the IBM support website).
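For example, to check on the array from the shell (this is from memory, so verify the exact syntax against the ServeRAID docs for your firmware/driver version):

    ipssend getconfig 1    # dump controller 1's config: arrays, logical and physical drives
    ipssend getstatus 1    # status of any running rebuild or sync on controller 1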
I know that Linux RAID will create a universal, more compatible array, readable on any Linux machine. But is there some other reason that makes it preferable to use the ServeRAID driver provided by CentOS? Is it optimized in any way that recommends its use?
Will the controller still make use of its cache and battery backup if configured as a plain SCSI controller with Linux Software RAID?
I hope that some more experienced list member can enlighten me on this.
Hope it's done now ;-)

PS: I've installed dozens of CentOS/RHEL systems on IBM machines without any problems.
It's a real RAID adapter, and the Linux kernel is able to handle it with the standard ips module.
I found some references in Google that seemed to indicate a "fakeraid" controller, one that depends on the driver to do the RAID calculations...
It's always up to you to decide, but I'd prefer using the HW RAID controller in that case, of course...
My fear is that, in case of controller failure, it will be difficult to find in time a replacement controller that recognizes the particular format of the array, or that it will be too expensive. As an example, I recently tried to buy the second processor for the machine (it originally came with only one) and they wanted more than 1500 dollars (yes!) for the P4 Xeon 3 GHz CPU.
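(That is what attracts me to md: the array metadata lives on the disks themselves, so any Linux box should be able to pick the array up, e.g.:

    mdadm --examine /dev/sd[abc]1    # inspect the md superblocks on the members
    mdadm --assemble --scan          # reassemble whatever arrays are found

with no matching controller required.)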
I am thankful to you all for your answers!
On Tue, Nov 18, 2008 at 9:43 PM, Miguel Medalha miguelmedalha@sapo.pt wrote:
It's a real RAID adapter, and the Linux kernel is able to handle it with the standard ips module.
I found some references in Google that seemed to indicate a "fakeraid" controller, one that depends on the driver to do the RAID calculations...
All ServeRAID adapters from IBM are real hardware-based RAID cards. The only exceptions are some cards that end with "e" in their name. But all "ServeRAID Xi" cards are real HW RAID. Anyway, see "http://www.redbooks.ibm.com/abstracts/tips0054.html" for a complete overview of all IBM ServeRAID cards.
I've run a lot of machines with these types of ServeRAIDs and they all worked fine.
Regards, Tim
Fabian Arrotin wrote:
If you use the HW RAID, you can easily manage your RAID array with either the ipssend CLI or the ServeRAID Manager GUI (both are downloadable from the IBM support website).
Fault detection sucks with that, but you can use ipmitool for it:
[root@on3-3550 ~]# ipmitool sdr|grep Drive
Drive 1 Status   | 0x01         | ok
Drive 2 Status   | 0x01         | ok
Drive 3 Status   | Not Readable | ns
Drive 4 Status   | Not Readable | ns
[root@on3-3550 ~]#
Cheers,
Ralph
Ralph Angenendt wrote:
Fabian Arrotin wrote:
If you use the HW RAID, you can easily manage your RAID array with either the ipssend CLI or the ServeRAID Manager GUI (both are downloadable from the IBM support website).
Fault detection sucks with that, but you can use ipmitool for it:
[root@on3-3550 ~]# ipmitool sdr|grep Drive
Drive 1 Status   | 0x01         | ok
Drive 2 Status   | 0x01         | ok
Drive 3 Status   | Not Readable | ns
Drive 4 Status   | Not Readable | ns
[root@on3-3550 ~]#
I installed the OpenIPMI packages but when I run the above command I get the following:
(geppetto pts5) # ipmitool sdr
Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory
Get Device ID command failed
Unable to open SDR for reading
(geppetto pts6) # rpm -qa | grep -i ipmi
OpenIPMI-libs-2.0.6-6.el5
OpenIPMI-2.0.6-6.el5
OpenIPMI-tools-2.0.6-6.el5
(geppetto pts6) # lsmod | grep -i ipmi
(geppetto pts6) #
Is there something special that needs to be done to enable ipmitool? lsmod shows that none of the ipmi modules are loaded. Do I need some magic incantation in /etc/modules.conf or is there some other package I need?
The machine is an IBM x3655 with C5 installed on it.
Regards,
Tom Diehl wrote:
I installed the OpenIPMI packages but when I run the above command I get the following:
(geppetto pts5) # ipmitool sdr
Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory
Get Device ID command failed
Unable to open SDR for reading
service ipmi start
And turn it on:
chkconfig ipmi on
Cheers,
Ralph
Ralph Angenendt wrote:
Tom Diehl wrote:
I installed the OpenIPMI packages but when I run the above command I get the following:
(geppetto pts5) # ipmitool sdr
Could not open device at /dev/ipmi0 or /dev/ipmi/0 or /dev/ipmidev/0: No such file or directory
Get Device ID command failed
Unable to open SDR for reading
service ipmi start
And turn it on:
chkconfig ipmi on
That will do it. I did not realize it was a daemon. DUH!!
Thanks for the help.
Regards,
miguelmedalha@sapo.pt wrote:
My question now is: what would be the better way to implement RAID 5 on this server? Should I use the detected array and its driver, or should I delete the array and go for Linux software RAID?
I've installed RHEL 4 on several IBM eSeries servers with ServeRAID controllers and I despise them. They fail too often, and often don't tell you that they are having problems until it is too late. My suggestion is to use Linux software RAID for your array and bypass the ServeRAID controller entirely.
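Creating the equivalent three-disk RAID 5 with md is a single command; something like this (a sketch, with example device names):

    mdadm --create /dev/md0 --level=5 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    mkfs.ext3 /dev/md0    # then make a filesystem on it as usual

And mdadm's --monitor mode can mail you when a disk drops out, instead of you relying on a vendor agent.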
Ian
I've installed RHEL 4 on several IBM eSeries servers with ServeRAID controllers and I despise them. They fail too often, and often don't tell you that they are having problems until it is too late. My suggestion is to use Linux software RAID for your array and bypass the ServeRAID controller entirely.
Are you referring to the SATA controllers or to the SCSI ones?