I need to do a new CentOS net install on a new server having the Supermicro X7DVL-3 motherboard:
http://www.supermicro.com/products/motherboard/xeon1333/5000V/X7DVL-3.cfm
Based on that info I assume the board has "8x SAS Ports via LSI 1068E Controller". We received the server with 3 drives + 1 spare as hw RAID-5 preinstalled. During bootup I see that the drives are initialised and everything seems ok.
The issue I am facing is that when trying to install CentOS no hard drives are recognised.
I am a bit confused as others have reported CentOS 5.3 and 5.4 working "out of the box" with the same controller:
http://www.linux-archive.org/centos/287219-installing-centos-5-4-64bit-serve...
So I assumed that the megaraid_sas driver shipped with CentOS should support the controller?
What confuses me also is the output of lspci:
MegaRAID SAS 3208 ELP
Does this mean it's not the 1068E controller inside but something else? Or is CentOS misidentifying it?
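As a rough sketch of how to double-check what the hardware actually reports (run from a rescue shell or any Linux environment on the box; the grep patterns are only examples):
  lspci -nn | grep -i -e lsi -e sas -e raid
The numeric [vendor:device] pair at the end of that line is what the kernel driver matches on; the text name in front of it is just a lookup in the pci.ids database and can be outdated or misleading.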
So I assume the controller is not supported and I need a binary driver for it. For 1068e it should be:
http://www.lsi.com/storage_home/products_home/standard_product_ics/sas_ics/l...
and the driver I need is inside the file mptlinux-4.26.00.00-1-rhel5.5.x86_64.dd.gz
But how do I go about applying this driver during installation, as the server has no floppy drive?
And what happens if I get the driver installed and then the server's kernel is updated? Do I need to reinstall the driver somehow?
Best regards, Peter
Peter Peltonen wrote:
I need to do a new CentOS net install on a new server having the Supermicro X7DVL-3 motherboard:
http://www.supermicro.com/products/motherboard/xeon1333/5000V/X7DVL-3.cfm
Based on that info I assume the board has "8x SAS Ports via LSI 1068E Controller". We received the server with 3 drives + 1 spare as hw RAID-5 preinstalled. During bootup I see that the drives are initialised and everything seems ok.
The issue I am facing is that when trying to install CentOS no hard drives are recognised.
<snip> I recently had a problem like that with a Dell box. The trick is that with a hardware controller, it supersedes software RAID. What you need to do is go into the firmware controller configuration on boot, before you get to grub, and make sure everything's visible and correct. The controller can see the drives, but not present them to the o/s if you don't.
mark
Hi and thanks for your reply,
On Wed, Mar 9, 2011 at 6:33 PM, m.roth@5-cent.us wrote:
Peter Peltonen wrote:
Based on that info I assume the board has "8x SAS Ports via LSI 1068E Controller". We received the server with 3 drives + 1 spare as hw RAID-5 preinstalled. During bootup I see that the drives are initialised and everything seems ok.
The issue I am facing is that when trying to install CentOS no hard drives are recognised.
<snip> I recently had a problem like that with a Dell box. The trick is that with a hardware controller, it supersedes software RAID. What you need to do is go into the firmware controller configuration on boot, before you get to grub, and make sure everything's visible and correct. The controller can see the drives, but not present them to the o/s if you don't.
Hmm, I am not sure if I understand you correctly: are you saying that in the firmware configuration there might be an option that makes the disks invisible to the OS? This sounds a bit strange and I wonder what such a config could be...
Or are you suggesting that I should put the controller in "JBOD mode" and then use software RAID instead of hardware RAID? I would not like to go with this option as I think the performance would suffer this way?
Regards, Peter
Peter Peltonen wrote:
Hi and thanks for your reply,
On Wed, Mar 9, 2011 at 6:33 PM, m.roth@5-cent.us wrote:
Peter Peltonen wrote:
Based on that info I assume the board has "8x SAS Ports via LSI 1068E Controller". We received the server with 3 drives + 1 spare as hw RAID-5 preinstalled. During bootup I see that the drives are initialised and everything seems ok.
The issue I am facing is that when trying to install CentOS no hard drives are recognised.
<snip> I recently had a problem like that with a Dell box. The trick is that with a hardware controller, it supersedes software RAID. What you need to do is go into the firmware controller configuration on boot, before you get to grub, and make sure everything's visible and correct. The controller can see the drives, but not present them to the o/s if you don't.
Hmm, I am not sure if I understand you correctly: are you saying that in the firmware configuration there might be an option that makes the disks invisible to the OS? This sounds a bit strange and I wonder what such a config could be...
Or are you suggesting that I should put the controller in "JBOD mode" and then use software RAID instead of hardware RAID? I would not like to go with this option as I think the performance would suffer this way?
Nope. They may have said they "pre-installed" the RAID, but you really need to go into the setup (<ctrl-c>, or -f, or whatever), and see what it presents ->logically<- (key buzzword). If it hasn't been initialized, or put into logical configuration, then it simply will not present the logical drives to the o/s, and AFAIK, it will *not* present the physical drives at all.
mark
On Wed, Mar 9, 2011 at 11:51 AM, m.roth@5-cent.us wrote:
Peter Peltonen wrote:
On Wed, Mar 9, 2011 at 6:33 PM, m.roth@5-cent.us wrote:
Peter Peltonen wrote:
Based on that info I assume the board has "8x SAS Ports via LSI 1068E Controller". We received the server with 3 drives + 1 spare as hw RAID-5 preinstalled. During bootup I see that the drives are initialised and everything seems ok.
The issue I am facing is that when trying to install CentOS no hard drives are recognised.
I recently had a problem like that with a Dell box. The trick is that with a hardware controller, it supersedes software RAID. What you need to do is go into the firmware controller configuration on boot, before you get to grub, and make sure everything's visible and correct. The controller can see the drives, but not present them to the o/s if you don't.
Hmm, I am not sure if I understand you correctly: are you saying that in the firmware configuration there might be an option that makes the disks invisible to the OS? This sounds a bit strange and I wonder what such a config could be...
Or are you suggesting that I should put the controller in "JBOD mode" and then use software RAID instead of hardware RAID? I would not like to go with this option as I think the performance would suffer this way?
Nope. They may have said they "pre-installed" the RAID, but you really need to go into the setup (<ctrl-c>, or -f, or whatever), and see what it presents ->logically<- (key buzzword). If it hasn't been initialized, or put into logical configuration, then it simply will not present the logical drives to the o/s, and AFAIK, it will *not* present the physical drives at all.
I think that it's ctrl-r and that you have to set up "virtual disks" using the "physical disks".
Based on that info I assume the board has "8x SAS Ports via LSI 1068E Controller". We received the server with 3 drives + 1 spare as hw RAID-5 preinstalled. During bootup I see that the drives are initialised and everything seems ok.
The issue I am facing is that when trying to install CentOS no hard drives are recognised.
One other thing to check, which is rare but I've seen before with your symptoms, is a controller that's not listed in the driver's PCI ID list. The chip onboard is a 1068e but if Supermicro used a nonstandard PCI ID, the driver wouldn't recognize it.
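A minimal way to check that, as a sketch (assuming the in-kernel Fusion-MPT driver, mptsas, is the one that should claim the chip; substitute the vendor:device pair that lspci -n reports for the controller):
  modinfo mptsas | grep -i alias
  grep mptsas /lib/modules/$(uname -r)/modules.pcimap
If the controller's vendor:device ID doesn't show up in either list, the stock driver will never bind to it, regardless of which silicon is actually on the board.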
On Wed, Mar 9, 2011 at 7:40 PM, Tom H tomh0665@gmail.com wrote:
On Wed, Mar 9, 2011 at 11:51 AM, m.roth@5-cent.us wrote:
Peter Peltonen wrote:
On Wed, Mar 9, 2011 at 6:33 PM, m.roth@5-cent.us wrote:
Peter Peltonen wrote:
Based on that info I assume the board has "8x SAS Ports via LSI 1068E Controller". We received the server with 3 drives + 1 spare as hw RAID-5 preinstalled. During bootup I see that the drives are initialised and everything seems ok.
The issue I am facing is that when trying to install CentOS no hard drives are recognised.
I recently had a problem like that with a Dell box. The trick is that with a hardware controller, it supersedes software RAID. What you need to do is go into the firmware controller configuration on boot, before you get to grub, and make sure everything's visible and correct. The controller can see the drives, but not present them to the o/s if you don't.
Nope. They may have said they "pre-installed" the RAID, but you really need to go into the setup (<ctrl-c>, or -f, or whatever), and see what it presents ->logically<- (key buzzword). If it hasn't been initialized, or put into logical configuration, then it simply will not present the logical drives to the o/s, and AFAIK, it will *not* present the physical drives at all.
I think that it's ctrl-r and that you have to set up "virtual disks" using the "physical disks".
Here are some pics of the RAID configuration:
http://www.knuka.org/raid1.jpg http://www.knuka.org/raid2.jpg
For me it seems that the drives are initialized and virtual disks setup, so it is not a hardware configuration issue?
I also received confirmation from the vendor that the controller is "LSI 1068E".
Should this controller be supported by CentOS 5.5 without a driver disk?
Regards, Peter
Here are some pics of the RAID configuration:
http://www.knuka.org/raid1.jpg http://www.knuka.org/raid2.jpg
It does indeed look configured...
Peter Peltonen wrote on Thu, 10 Mar 2011 10:12:48 +0200:
Should this controller be supported by CentOS 5.5 without a driver disk?
Yes, but maybe not with this specific card. For instance, I get this from lspci:
02:00.0 SCSI storage controller: LSI Logic / Symbios Logic SAS1068E PCI-Express Fusion-MPT SAS (rev 08)
See, it mentions MPT. The built-in MPT driver works fine with it. This one may need an LSI-provided MegaRAID driver. You should ask SuperMicro about this. Assuming that they support RHEL, they should be able to tell you whether you need extra drivers or whether you can use the driver that comes with the system.
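As a sketch of how to see what the installer itself is doing (CentOS 5's anaconda gives you a shell on tty2 with Ctrl+Alt+F2 during the install; the log location is from memory and may differ):
  cat /proc/partitions
  lsmod | grep -i -e mpt -e mega
  grep -i -e mpt -e mega -e scsi /tmp/syslog
If no sd* entries appear in /proc/partitions and neither driver family is loaded, the installer simply has no driver bound to the controller, which matches the symptom of no disks being offered.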
Kai
On 3/9/2011 10:47 AM, Peter Peltonen wrote:
I recently had a problem like that with a Dell box. The trick is that with a hardware controller, it supersedes software RAID. What you need to do is go into the firmware controller configuration on boot, before you get to grub, and make sure everything's visible and correct. The controller can see the drives, but not present them to the o/s if you don't.
Hmm, I am not sure if I understand you correctly: are you saying that in the firmware configuration there might be an option that makes the disks invisible to the OS? This sounds a bit strange and I wonder what such a config could be...
Some controllers want to map arrays to volumes and present the volumes to the OS instead of drives, so you have to go through the motions of assigning the resources to volumes and initializing them even if you only want one disk in the array or volume.
Or are you suggesting that I should put the controller in "JBOD mode" and then use software RAID instead of hardware RAID? I would not like to go with this option as I think the performance would suffer this way?
Depending on the raid level there may or may not be a performance difference, but the point is you have to configure the controller and drives the way you want them before they will show up at all.
Hi,
On Wed, Mar 9, 2011 at 6:57 PM, Les Mikesell lesmikesell@gmail.com wrote:
Some controllers want to map arrays to volumes and present the volumes to the OS instead of drives, so you have to go through the motions of assigning the resources to volumes and initializing them even if you only want one disk in the array or volume.
I am pretty sure this was done already as that was what I had been told, and I remember seeing on the screen during bootup messages about the drives being initialized and RAID5 working ok. But it's been a while since I've worked with hardware issues, so I will double check this tomorrow and show you the config.
So is it the case that the LSI 1068E Controller *should* be supported by the megaraid_sas driver, and that the net install should use it without needing a driver disk?
Regards, Peter
On 3/9/2011 12:10 PM, Peter Peltonen wrote:
Hi,
On Wed, Mar 9, 2011 at 6:57 PM, Les Mikesell lesmikesell@gmail.com wrote:
Some controllers want to map arrays to volumes and present the volumes to the OS instead of drives, so you have to go through the motions of assigning the resources to volumes and initializing them even if you only want one disk in the array or volume.
I am pretty sure this was done already as that was what I had been told, and I remember seeing on the screen during bootup messages about the drives being initialized and RAID5 working ok. But it's been a while since I've worked with hardware issues, so I will double check this tomorrow and show you the config.
So is it the case that the LSI 1068E Controller *should* be supported by the megaraid_sas driver, and that the net install should use it without needing a driver disk?
Regards, Peter
Go into the configuration of the card itself and make sure the raid array is not only configured, but initialized and bootable. Once it is set up correctly it should get seen correctly. MegaRAID is the "technical" name for just about all of LSI's controller chips. :)
Hmm, I am not sure if I understand you correctly: are you saying that in the firmware configuration there might be an option that makes the disks invisible to the OS?
Most controllers have a firmware you can enter at boot with a keystroke. Once in, you create/prepare arrays or single drives, which the OS can then see...
Hmm, I am not sure if I understand you correctly: are you saying that in the firmware configuration there might be an option that makes the disks invisible to the OS?
No, not as such. You just have to define the arrays: assign the drives as needed. It's a rare thing that a factory will set up the controller and drives in a way that suits your needs.
I think you mentioned that CentOS does see the controller (but lists a different number) and isn't seeing the drives. Which is why myself and others are mentioning configuring the drives within the controller's BIOS.
The number CentOS sees might just be the controller's chipset number rather than the controller's part number...
On 03/09/11 16:55, Peter Peltonen wrote:
I need to do a new CentOS net install on a new server having the Supermicro X7DVL-3 motherboard:
http://www.supermicro.com/products/motherboard/xeon1333/5000V/X7DVL-3.cfm
Based on that info I assume the board has "8x SAS Ports via LSI 1068E Controller". We received the server with 3 drives + 1 spare as hw RAID-5 preinstalled. During bootup I see that the drives are initialised and everything seems ok.
The issue I am facing is that when trying to install CentOS no hard drives are recognised.
*snip*
Best regards, Peter
That controller doesn't really support RAID, what you're getting is commonly called FakeRAID. It basically helps the BIOS to boot from the RAID arrays you create but leaves the actual RAID calculations etc... to the driver.
Configure the board in IT mode (Initiator/Target). That will disable the FakeRAID. It's jumper JPA2 just above the SAS ports. Once you've done that, clean the drives (dd if=/dev/zero of=/dev/sd? bs=1M) so no signatures from the FakeRAID BIOS remain. After that install CentOS as you would normally and use software RAID (which is better anyway).
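As a rough sketch of the software RAID layout being described here (the anaconda installer can build the same thing from its partitioning screen; device names below are made up and must match whatever appears once the controller is in IT mode):
  # create one RAID partition (type fd) per disk, e.g. sda1..sdd1, then:
  mdadm --create /dev/md0 --level=5 --raid-devices=3 --spare-devices=1 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
  mdadm --detail /dev/md0
  mdadm --examine --scan >> /etc/mdadm.conf
That mirrors the original three-disk RAID-5 plus hot spare; for booting you would normally also carve out a small RAID-1 /boot, since the GRUB shipped with CentOS 5 cannot boot from an md RAID-5.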
By the way, the X7DVL-3 is a pretty old board; you say this is a new server? I hope you didn't pay a lot of money for it.
Glenn
RedShift wrote on Thu, 10 Mar 2011 17:05:02 +0100:
That controller doesn't really support RAID, what you're getting is commonly called FakeRAID
If you are referring to the 1068E, that is completely wrong.
Kai
On Thu, Mar 10, 2011 at 6:35 PM, Kai Schaetzl maillists@conactive.com wrote:
RedShift wrote on Thu, 10 Mar 2011 17:05:02 +0100:
That controller doesn't really support RAID, what you're getting is commonly called FakeRAID
If you are referring to the 1068E, that is completely wrong.
The vendor who built the server confirmed that the controller card should be LSI 1068E.
I found a driver for RHEL 5.5 on LSI's page. And some more digging revealed that I should be able to use the linux dd=url format for loading the driver from the network, as the machine has no floppy. I will test that next week when I get my hands on that machine again.
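A minimal sketch of that, assuming a web server reachable from the new machine (the 192.0.2.10 address and path are made up, and the .dd.gz from LSI needs to be uncompressed first):
  gunzip mptlinux-4.26.00.00-1-rhel5.5.x86_64.dd.gz
  # copy the resulting .dd image to the web server, then at the installer boot prompt:
  linux dd=http://192.0.2.10/drivers/mptlinux-4.26.00.00-1-rhel5.5.x86_64.dd
Anaconda should then load the driver disk before it scans for storage.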
A few questions though I would like to get answered:
1) What is the best way to find out if a certain controller etc is supported by the kernel? Do I need to download the kernel src rpm, install it and look for some documentation/source somewhere?
2) If I need to use the binary driver by LSI, how do I proceed with kernel updates?
Best regards, Peter
Peter Peltonen wrote on Thu, 10 Mar 2011 19:08:14 +0200:
- If I need to use the binary driver by LSI, how do I proceed with kernel updates?
The download is quite large and you will notice that there is a dkms rpm in it, amongst a lot of other stuff. dkms can cope with updates, AFAIK. It rebuilds the module if it detects a newer kernel, or on every reboot or so (others will know better). For this you need to have the build requirements installed on the machine! dkms isn't the preferred method anymore, but I don't think that package contains a "weak-updates" module.
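As a sketch of what that workflow looks like (the module name and version are guesses taken from the file name Peter mentioned, and the exact rpm names in LSI's bundle will differ):
  yum install gcc kernel-devel            # build requirements dkms needs
  rpm -ivh mptlinux-*.dkms.noarch.rpm     # hypothetical dkms package from the LSI download
  dkms status                             # which module versions are built for which kernels
  dkms build -m mptlinux -v 4.26.00.00 && dkms install -m mptlinux -v 4.26.00.00
After a kernel update, dkms (via its boot-time autoinstaller service) tries to rebuild and install the module for the new kernel, which is exactly the step a plain prebuilt .ko cannot do on its own.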
Kai
Google for "LSI M1068E". There are a few interesting postings on the first result page, all from people having serious problems getting it to work. You have an M1068E, which doesn't seem to be the same as a 1068E, at least not on the firmware side. You *need* the MegaRAID driver. The built-in one won't work.
Kai
On 03/10/11 8:35 AM, Kai Schaetzl wrote:
RedShift wrote on Thu, 10 Mar 2011 17:05:02 +0100:
That controller doesn't really support RAID, what you're getting is commonly called FakeRAID
If you are referring to the 1068E, that is completely wrong.
They do have basic hardware raid, with an embedded control processor, but they don't have any battery-backed write-back cache, which negates any real advantages of hardware raid. I always configure those as simple SAS controllers and use the OS native software raid (mdraid mirroring in the case of Linux). I also almost never use any raid level above raid1 or 10 (mirror or stripe/mirror).
On 3/10/2011 12:19 PM, John R Pierce wrote:
That controller doesn't really support RAID, what you're getting is commonly called FakeRAID
If you are referring to the 1068E, that is completely wrong.
They do have basic hardware raid, with an embedded control processor, but they don't have any battery-backed write-back cache, which negates any real advantages of hardware raid.
How important is the card-level battery if you have a UPS and a scheme to monitor it and do a graceful shutdown before it fails?
I always configure those as simple SAS controllers and use the OS native software raid (mdraid mirroring in the case of Linux). I also almost never use any raid level above raid1 or 10 (mirror or stripe/mirror).
I like raid 1 myself because you can recover data from any remaining disk after a failure, and software raid because you can use any vendor's controller for that recovery; but if you use raid 5 you might want the hardware controller to do the parity computation work.
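A quick sketch of the kind of recovery Les means (device names are made up; with the old 0.90 md metadata the superblock sits at the end of the member, so the data layout is an ordinary filesystem any Linux box can read):
  # attach the surviving disk to any machine, then start the array degraded:
  mdadm --assemble --run /dev/md0 /dev/sdb1
  mount -o ro /dev/md0 /mnt/recovery
With hardware RAID you would instead need a compatible controller from the same vendor before the on-disk format means anything.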
On 03/10/11 10:56 AM, Les Mikesell wrote:
How important is the card-level battery if you have a UPS and a scheme to monitor it and do a graceful shutdown before it fails?
how important is your data? if your system *never* crashes, and you're not running something like a database server dependent on committed writes, I'm sure you'd be fine.
Me personally, I've had to recover from total DC UPS failures. I like to put redundant power supplies on alternate UPS's, but with datacenter-sized UPS's that's often not an option.
On 3/10/2011 1:01 PM, John R Pierce wrote:
On 03/10/11 10:56 AM, Les Mikesell wrote:
How important is the card-level battery if you have a UPS and a scheme to monitor it and do a graceful shutdown before it fails?
how important is your data? if your system *never* crashes, and you're not running something like a database server dependent on committed writes, I'm sure you'd be fine.
Me personally, I've had to recover from total DC UPS failures. I like to put redundant power supplies on alternate UPS's, but with datacenter-sized UPS's that's often not an option.
Sure, UPS's fail, plugs get pulled, etc., but the cards and internal batteries most likely have their own failure modes. Or the whole box can fry at once. Did you have any way to tell if your battery-backed cache saved any data as the disks lost power, or did the filesystems just back out the incomplete writes anyway?
On 03/10/11 11:36 AM, Les Mikesell wrote:
Sure, UPS's fail, plugs get pulled, etc., but the cards and internal batteries most likely have their own failure modes. Or the whole box can fry at once. Did you have any way to tell if your battery-backed cache saved any data as the disks lost power, or did the filesystems just back out the incomplete writes anyway?
Battery-backed write-back caches on raid controllers flush any pending data to the disks when power is restored. If for some reason they can't, they flag an error.
When an application (such as a database server) or file system issues an fdatasync or fsync, it expects that when that operation returns success, all data has been committed to non-volatile storage. BBWCs exist to speed up that critical operation, as actually committing data to disk is slow and expensive. This is of particular importance to a transactional database server: each COMMIT; has to be committed to disk.
I am intentionally sidestepping the issue of cheap desktop grade storage that ignores buffer flush commands as these really aren't suitable for transactional database servers unless your data just isn't that important. IDE and SATA stuff has always been 'soft' on this, while SCSI, FC, and SAS drives are much more consistent.
On 3/10/2011 1:50 PM, John R Pierce wrote:
Battery-backed write-back caches on raid controllers flush any pending data to the disks when power is restored. If for some reason they can't, they flag an error.
I know what they are supposed to do - I was just wondering if it happens in practice under real-world conditions.
When an application (such as a database server) or file system issues an fdatasync or fsync, it expects that when that operation returns success, all data has been committed to non-volatile storage. BBWCs exist to speed up that critical operation, as actually committing data to disk is slow and expensive. This is of particular importance to a transactional database server: each COMMIT; has to be committed to disk.
But if you didn't just do the fsync (i.e. you are running just about anything but a transactional db), odds are that the directory update won't match the data and journal recovery will drop it anyway.
I am intentionally sidestepping the issue of cheap desktop grade storage that ignores buffer flush commands as these really aren't suitable for transactional database servers unless your data just isn't that important. IDE and SATA stuff has always been 'soft' on this, while SCSI, FC, and SAS drives are much more consistent.
I thought there were also problems in layers like lvm that keep the OS from knowing exactly what happened. And a lot of software that should fsync at certain points probably doesn't because linux has historically handled it badly.
On 03/10/11 12:40 PM, Les Mikesell wrote:
I thought there were also problems in layers like lvm that keep the OS from knowing exactly what happened. And a lot of software that should fsync at certain points probably doesn't because linux has historically handled it badly.
That's another problem entirely. Both the MD and LVM layers of Linux tend to drop write barriers, which are supposed to ensure that key writes occur in the correct order. This is one reason we tend to run our mission-critical database servers on Solaris or AIX rather than Linux.
On Mar 10, 2011, at 3:49 PM, John R Pierce pierce@hogranch.com wrote:
On 03/10/11 12:40 PM, Les Mikesell wrote:
I thought there were also problems in layers like lvm that keep the OS from knowing exactly what happened. And a lot of software that should fsync at certain points probably doesn't because linux has historically handled it badly.
That's another problem entirely. Both the MD and LVM layers of Linux tend to drop write barriers, which are supposed to ensure that key writes occur in the correct order. This is one reason we tend to run our mission-critical database servers on Solaris or AIX rather than Linux.
I think LVM respecting barriers is in RHEL6.
The lack of barrier support is mitigated by the battery-backed write-back cache as far as volatility is concerned, though barriers also preserve ordering, which a BBWBC doesn't guarantee; advanced RAID controllers should support FUA (forced unit access), which allows properly written SCSI subsystems to preserve ordering. An FUA will make sure all pending data is flushed to disk, then the data that the FUA covers is written directly to disk.
Barrier support was revised recently to only support FUA devices, I believe because non-FUA devices were too expensive (performance-wise) to kludge barrier support for, so if your device doesn't do FUA then its barriers are basically a no-op.
-Ross
On Mar 10, 2011, at 6:33 PM, Ross Walker rswwalker@gmail.com wrote:
On Mar 10, 2011, at 3:49 PM, John R Pierce pierce@hogranch.com wrote:
On 03/10/11 12:40 PM, Les Mikesell wrote:
I thought there were also problems in layers like lvm that keep the OS from knowing exactly what happened. And a lot of software that should fsync at certain points probably doesn't because linux has historically handled it badly.
That's another problem entirely. Both the MD and LVM layers of Linux tend to drop write barriers, which are supposed to ensure that key writes occur in the correct order. This is one reason we tend to run our mission-critical database servers on Solaris or AIX rather than Linux.
I think LVM respecting barriers is in RHEL6.
The lack of barrier support is mitigated by the battery-backed write-back cache as far as volatility is concerned, though barriers also preserve ordering, which a BBWBC doesn't guarantee; advanced RAID controllers should support FUA (forced unit access), which allows properly written SCSI subsystems to preserve ordering. An FUA will make sure all pending data is flushed to disk, then the data that the FUA covers is written directly to disk.
Barrier support was revised recently to only support FUA devices, I believe because non-FUA devices were too expensive (performance-wise) to kludge barrier support for, so if your device doesn't do FUA then its barriers are basically a no-op.
Let me correct myself: the drives need to support 'sync'; FUA is a nice optional extra as it negates the need for sync-write-sync. But still, for cheap drives that don't respect 'sync' it's a no-op, where before it used to do a drain-stop (painfully slow).
-Ross
Peter Peltonen wrote on Wed, 9 Mar 2011 17:55:04 +0200:
http://www.supermicro.com/products/motherboard/xeon1333/5000V/X7DVL-3.cfm
Based on that info I assume the board has "8x SAS Ports via LSI 1068E Controller".
Well, did you check at the LSI site for the controller/card that *is* detected (MegaRAID 3028)? Maybe it's not a 1068E. There is no mention of it anywhere except on that product page. Maybe that info is wrong. Also, you should be aware that the 1068E usually sits on a PCI-Express card. If that is not present, or if you use the SATA ports on the MB, that is *not* the 1068E!
Kai
I have now partially solved my problem:
On Wed, Mar 9, 2011 at 5:55 PM, Peter Peltonen peter.peltonen@gmail.com wrote:
I need to do a new CentOS net install on a new server having the Supermicro X7DVL-3 motherboard:
[...]
So I assume the controller is not supported and I need a binary driver for it. For 1068e it should be:
I received the driver image megasr-13.17.0421.2010-1-rhel50-u5-all.img from the hardware vendor and was able to use it as the driver disk for installation.
The kernel upgrade issue is still unresolved though:
And what happens if I get the driver installed and then the server's kernel is updated? Do I need to reinstall the driver somehow?
After updates the system is unable to boot with the new kernel as it cannot find the megasr driver.
What should I do? Does the megasr module for the old kernel also work with the new kernel => do I need to copy it somewhere and create an initrd image including that module? Or do I need to find an updated megasr module from somewhere?
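As a sketch of the copy-and-rebuild-initrd approach (kernel versions and the module path are made up, and this only works if the old megasr.ko happens to load against the new kernel, which is far from guaranteed for an out-of-tree binary module):
  OLDK=2.6.18-194.el5; NEWK=2.6.18-238.el5   # hypothetical kernel versions
  mkdir -p /lib/modules/$NEWK/extra
  cp /lib/modules/$OLDK/extra/megasr.ko /lib/modules/$NEWK/extra/
  depmod -a $NEWK
  mkinitrd -f --with=megasr /boot/initrd-$NEWK.img $NEWK
If modprobe then complains about unresolved symbols, the module really does have to be rebuilt (or re-downloaded) for the new kernel, which is the situation the dkms packaging mentioned earlier is meant to handle.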
Best regards, Peter
Hi Peter,
I too was very undecided about using the LSI 1068e (on-board on many Supermicro boards) in production for this very reason. The problem is that the chipset is basically unsupported by LSI, and updates to it are sometimes necessary to maintain compatibility. The driver sometimes won't build on updated kernels, forcing you to hold onto a particular version. Even if you could get away from that, the driver is a black box and we found it to be somewhat unreliable (read: crashy). I believe even in IT mode that chipset requires a different driver (not megasr; mptsas?) and that was also a black box.
We decided, after a few update mistakes and trips to the datacenter, that it wasn't worth it. I suggest instead buying a separate MegaRAID card (I highly recommend the 92XX series), which is very well supported by the open source, in-kernel driver (megaraid_sas? megasas?). I don't believe the 1068e is a wise choice for new installations, and the 4i variants run as low as $180 new.
We currently run, in production, the X8DT3 with a 9240-8i card and skipped the onboard.
Brandon
On Wed, Mar 16, 2011 at 8:19 AM, Peter Peltonen peter.peltonen@gmail.comwrote:
I have now partially solved my problem:
On Wed, Mar 9, 2011 at 5:55 PM, Peter Peltonen peter.peltonen@gmail.com wrote:
I need to do a new CentOS net install on a new server having the Supermicro X7DVL-3 motherboard:
[...]
So I assume the controller is not supported and I need a binary driver for it. For 1068e it should be:
I received the driver image megasr-13.17.0421.2010-1-rhel50-u5-all.img from the hardware vendor and was able to use it as the driver disk for installation.
The kernel upgrade issue is still unresolved though:
And what happens if I get the driver installed and then the server's kernel is updated? Do I need to reinstall the driver somehow?
After updates the system is unable to boot with the new kernel as it cannot find the megasr driver.
What should I do? Does the megasr module for the old kernel also work with the new kernel => do I need to copy it somewhere and create an initrd image including that module? Or do I need to find an updated megasr module from somewhere?
Best regards, Peter
thus Peter Peltonen spake:
I have now partially solved my problem:
On Wed, Mar 9, 2011 at 5:55 PM, Peter Peltonen peter.peltonen@gmail.com wrote:
I need to do a new CentOS net install on a new server having the Supermicro X7DVL-3 motherboard:
[...]
So I assume the controller is not supported and I need a binary driver for it. For 1068e it should be:
I received the driver image megasr-13.17.0421.2010-1-rhel50-u5-all.img from the hardware vendor and was able to use it as the driver disk for installation.
The kernel upgrade issue is still unresolved though:
And what happens if I get the driver installed and then the server's kernel is updated? Do I need to reinstall the driver somehow?
After updates the system is unable to boot with the new kernel as it cannot find the megasr driver.
What should I do? Does the megasr module for the old kernel also work with the new kernel => do I need to copy it somewhere and create an initrd image including that module? Or do I need to find an updated megasr module from somewhere?
I had this hardware, too, from a customer of ours.
http://blog.uguu.ru/tag/piece-of-shit/
It's not my blog but it's exactly what reality is like.
Cheers,
Timo