Les Mikesell wrote:
> Have you investigated any of the mostly-software alternatives for this like
> openfiler, nexentastor, etc., or rolling your own iscsi server out of
> opensolaris or centos?

I have, and it depends on your needs. I ran Openfiler a couple of years ago with ESX and it worked OK. The main issue there was stability. I landed on a decent configuration that worked fine as long as you didn't touch it (kernel updates often caused kernel panics on the hardware, which was an older HP DL580). And when Openfiler finally came out with their newer "major" version, the only upgrade path was to completely re-install the OS (maybe that's changed now, I don't know).

A second issue was availability. Openfiler (and others) have replication, and clustering in some cases, but I've yet to see anything come close to what the formal commercial storage solutions can provide (seamless failover, online software upgrades, etc.). Mirrored cache is a big one as well.

Storage can be the biggest pain point to address when dealing with a consolidated environment, since in many cases it remains a single point of failure. Network fault tolerance is fairly simple to address, and throwing more servers at the problem to account for server failure is easy, but the data can often only live in one place at a time. Some higher-end arrays offer synchronous replication to another system, though that replication is not application aware (i.e. it is only crash consistent), so you are at some risk of data loss when using it with applications that are not aggressive about data integrity (like Oracle, for example). A local VMware consulting shop here that I have a lot of respect for says that in their experience doing crash-consistent replication of VMFS volumes between storage arrays, there is about a 10% chance that one of the VMs on the replicated volume will not be recoverable; as a result they heavily promoted NetApp's VMware-aware replication, which is much safer.
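For what it's worth, if you do want to roll your own iSCSI target on CentOS, the scsi-target-utils package (tgtd) will get you a basic one. A minimal /etc/tgt/targets.conf sketch, just to show the shape of it (the IQN, backing device, and subnet below are made-up placeholders):

```
# /etc/tgt/targets.conf -- minimal single-LUN example
# IQN, device path, and initiator subnet are placeholders; adjust to taste.
<target iqn.2010-01.com.example:san.lun0>
    # block device (or file) exported as the LUN
    backing-store /dev/vg0/iscsi_lun0
    # restrict which initiators may connect
    initiator-address 192.168.1.0/24
</target>
```

Then start tgtd and point your initiators at it. Of course this gives you none of the failover, mirrored cache, or managed firmware story discussed above; it's strictly the DIY end of the spectrum.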
My own vendor, 3PAR, released similar software a couple of weeks ago for their systems. Shared storage can also be a significant pain point for performance with a poor setup.

Another advantage of a proper enterprise-type solution is support, mainly for firmware updates. My main array at work, for example, is using Seagate enterprise SATA drives. The vendor has updated the firmware on them twice in the past six months. So not only was the process made easy since it was automatic, but since it's their product they work closely with the manufacturer, are kept in the loop when important updates/fixes come out, and have access to them; last I checked it was a very rare case to be able to get HDD firmware updates from a manufacturer's web site. The system "worked" perfectly fine before the updates. I don't know what the most recent update was for, but the one performed in August was for an edge case where silent data corruption could occur on the disk if a certain type of error condition was encountered, so the vendor sent out an urgent alert to all customers using that type of drive to get them updated ASAP.

A co-worker of mine had to update the firmware on some other Seagate disks (SCSI) in 2008 on about 50 servers due to a performance issue with our application. In that case he had to go to each system individually with a DOS boot disk and update the disks, a very time-consuming process involving a lot of downtime. My company had spent almost a year trying to track down the problem before I joined; I ran some diagnostics and fairly quickly narrowed the problem down to the systems running Seagate disks (some other systems running the same app had other brands of disks and were not impacted; stupid Dell). A lot of firmware update tools, I suspect, don't work well with RAID controllers either, since the disks are abstracted, further complicating the issue of upgrading them.
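Even just finding out which firmware your drives are running across 50 hosts is half the battle. A rough sketch of the kind of helper I mean, parsing `smartctl -i`-style output for the firmware revision (the device model and firmware string below are made-up sample data, not a real advisory):

```shell
# Hypothetical helper: extract the firmware revision from `smartctl -i`
# style output, so you can inventory which drives still need an update.
fw_version() {
  # smartctl prints a line like "Firmware Version: SN06"
  grep '^Firmware Version:' | awk '{print $3}'
}

# Made-up sample output standing in for `smartctl -i /dev/sda`
sample='Device Model:     ST31000340NS
Firmware Version: SN06
User Capacity:    1,000,204,886,016 bytes'

printf '%s\n' "$sample" | fw_version   # prints "SN06"
```

On a live box you'd feed it `smartctl -i /dev/sda` instead of the sample text, though as noted above, disks hidden behind a RAID controller may not answer at all without controller-specific pass-through options.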
So it all depends on what the needs are. You can go with the cheaper software options, just try to set expectations accordingly when using them. Which for me is basically: "don't freak out when it blows up".

nate