David Mackintosh wrote:
On Sun, Jun 29, 2008 at 09:08:15AM +0200, Rudi Ahlers wrote:
Hi all
I want to look at setting up a simple / cheap SAN / NAS server using a normal PIV motherboard, 2GB (or more) RAM, a Core 2 Duo CPU (probably an Intel 6700 / 6750 / 6800) and some SATA HDDs (4 or 6 x 320GB - 750GB). My budget is limited, so I can't afford a pre-built NAS device.
My own experience: I have built two NAS systems using CentOS. One is an HP DL585 G1 with four 300GB drives in hardware RAID-5. The other is a Dell PowerEdge 2600 with four 300GB drives (software RAID-10) and two 32GB drives (software RAID-1).
One has a multi-core Opteron processor, the other has a high-end Xeon processor with HT disabled. Both have 2GB of RAM.
Both are used as NFS servers by high-demand compute processes.
Despite a lot of fiddling, configuring, testing and tuning, neither performs very well as an NFS server. We've gone so far as to mount everything noatime (i.e. the local mount, the NFS export, and the NFS client mount) hoping for better performance.
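For concreteness, a rough sketch of what "noatime everywhere" looks like; the device name, export path, client subnet and option values here are only placeholders, not our exact configuration:

    # /etc/fstab on the server -- /dev/md0 and /export are hypothetical
    /dev/md0    /export    ext3    defaults,noatime    0 2

    # /etc/exports on the server -- subnet and export options are illustrative
    /export     192.168.1.0/24(rw,async,no_subtree_check)

    # NFS client mount, again with noatime
    mount -t nfs -o rw,noatime,rsize=32768,wsize=32768 server:/export /mnt/export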
In comparing the systems, we tried the hardware RAID-5 first on the assumption that hardware RAID-5 would be faster than software RAID and would yield more usable capacity than RAID-10. However, we don't think the kernel's I/O elevator makes intelligent scheduling decisions on the hardware RAID-5, because it never sees the "real" geometry of the disks involved, only the apparent geometry of the RAID-5 volume.
The software RAID-10 is better in some ways because the kernel sees the real disk geometries. Its performance is roughly on par with the other machine, even though that machine has the better CPU.
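For reference (not our exact commands), a four-disk software RAID-10 array like the PowerEdge's can be built and inspected with mdadm; the device names below are examples only:

    # create a 4-disk software RAID-10 array (example device names)
    mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sd[bcde]1

    # check sync status and layout
    cat /proc/mdstat
    mdadm --detail /dev/md0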
Due to the hardware involved I couldn't try Solaris 10, but in past experience the NFS server on Solaris has been significantly better than the NFS server in CentOS/RedHat, both in throughput and in perceived latency under load.
If I were doing it again, I'd push harder for a budget for a NetApp filer. For what we are attempting to do, you get what you pay for.
If I were doing it again under the same budget restrictions, I'd probably try Solaris with software RAID. I would try the *BSD family next, but only after Solaris, because I have extensive Solaris experience.
On Linux storage servers that use RAID, try elevator=deadline for better I/O scheduling performance.
The default 'cfq' scheduler is really designed for single-disk, interactive workstation I/O patterns.
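In case it helps, here is one way to check and switch the scheduler; the device name and kernel version below are just examples:

    # per-device, at runtime: the active scheduler is shown in brackets
    cat /sys/block/sda/queue/scheduler
    echo deadline > /sys/block/sda/queue/scheduler

    # globally, at boot: append elevator=deadline to the kernel line
    # in /boot/grub/grub.conf (kernel version is an example)
    kernel /vmlinuz-2.6.18-92.el5 ro root=/dev/VolGroup00/LogVol00 elevator=deadline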
-Ross