On Jan 11, 2009, at 1:06 PM, Stewart Williams <lists at pinkyboots.co.uk> wrote:

> Ross Walker wrote:
>> On Jan 11, 2009, at 10:13 AM, Stewart Williams
>> <lists at pinkyboots.co.uk> wrote:
>>
>>> William Warren wrote:
>>>> Stewart Williams wrote:
>>>>> I have just purchased an HP ProLiant ML110 G5 server and installed
>>>>> CentOS 5.2 x86_64 on it.
>>>>>
>>>>> It has the following spec:
>>>>>
>>>>> Intel(R) Xeon(R) CPU 3065 @ 2.33GHz
>>>>> 4GB ECC memory
>>>>> 4 x 250GB SATA hard disks running at 1.5Gb/s
>>>>>
>>>>> Onboard RAID controller is enabled but at the moment I have used
>>>>> mdadm to configure the array.
>>>>>
>>>>> RAID bus controller: Intel Corporation 82801 SATA RAID Controller
>>>>>
>>>>> For a simple striped array I ran:
>>>>>
>>>>> # mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
>>>>> # mke2fs -j /dev/md0
>>>>> # mount -t ext3 /dev/md0 /mnt
>>>>>
>>>>> Attached are the results of 2 bonnie++ tests I made to test the
>>>>> performance:
>>>>>
>>>>> # bonnie++ -s 256m -d /mnt -u 0 -r 0
>>>>>
>>>>> and
>>>>>
>>>>> # bonnie++ -s 1g -d /mnt -u 0 -r 0
>>>>>
>>>>> I also tried 3 of the drives in a RAID 5 setup which gave similar
>>>>> results.
>>>>>
>>>>> Is it me or are the results poor?
>>>>>
>>>>> Is this the best I can expect from the hardware or is something
>>>>> wrong?
>>>>>
>>>>> I would appreciate any advice or possible tweaks I can make to the
>>>>> system to make the performance better.
>>>>>
>>>>> The block I/O is the thing that concerns me as mostly I am serving
>>>>> a 650MB file via samba to 5 clients and I think this is where I
>>>>> need the speed.
>>>>>
>>>>> Plus I am hoping to run some virtualised guests on it eventually,
>>>>> but nothing too heavy.
>>>>>
>>>> That onboard RAID is fakeraid, so when you dial up RAID 5 you
>>>> effectively put the HDDs in PIO mode since ALL data has to be
>>>> routed through your CPU. Please get a RAID card from HP or go get
>>>> a 3ware card so you have real hardware RAID.
>>>>
>>>> Fake and real RAID chipsets:
>>>> http://linuxmafia.com/faq/Hardware/sata.html
>>>>
>>>> Why using fakeraid at all is bad:
>>>> http://thebs413.blogspot.com/2005/09/fake-raid-fraid-sucks-even-more-at.html
>>>>
>>>> MD under Linux is kernel RAID that does not use a binary driver;
>>>> however, you don't want to do ANY software RAID 5.
>>> Thanks William,
>>>
>>> I am no expert on RAID, so you have opened my eyes to some things I
>>> wasn't aware of.
>>>
>>> I am considering disabling the onboard RAID in the BIOS and
>>> re-installing CentOS and configuring the 4 drives as RAID 10 just
>>> to see what the performance is like.
>>>
>>> Or I may purchase a card as you advise. Would I benefit from buying
>>> a SCSI or SAS card and drives for my requirements? Basically the
>>> main role of the machine is to serve a ~600MB file via samba to 5
>>> Windows XP client PCs on a gigabit network.
>>
>> If all you're doing is serving a single file to a handful of PCs
>> then a 2 drive mirror will be more than enough.
>
> That is what I currently have set up on the old server, but it only
> has 1GB RAM and an AMD Duron 1300MHz CPU.
>
> The performance on the clients gets slower as the file size grows and
> now it has got very slow - hence the new server.

Sounds like the file is getting more and more fragmented and the IO
over it is turning into random IO. Once a week, disable access to the
file, copy it to a new name, then move the copy back over the top of
the old one and that'll defrag it.
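Something like this from cron once a week would do it (just a sketch,
and the file path and the smb init script name are placeholders for
whatever is actually on your box):

#!/bin/sh
# Rewrite the shared file so its blocks come back out (mostly) contiguous.
FILE=/srv/share/bigfile.dat       # placeholder, point this at the real file

service smb stop                  # make sure no client still has it open
cp -p "$FILE" "$FILE.new"         # the copy gets written out sequentially
mv "$FILE.new" "$FILE"            # drop it back over the fragmented original
service smb start

Put that in /etc/cron.weekly/ (or a cron entry in a quiet hour) and it
looks after itself.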
>> You should stick with the OS RAID though as the onboard RAID will
>> bring nothing but pain.
>
> That is what I have read. So understood :-)
>
>> For sequential IO expect 60MB/s read and 40MB/s write (with the
>> drive's write cache enabled) per drive. Random IO is an order of
>> magnitude less.
>
> Should that be OK for my needs, or should I be wanting more for the
> clients to be happy? What figure should I be looking at?

That's what to expect with standard file IO operations (4k); some apps
use larger IOs so they will get better throughput (backups 64k, video
editing 128k+), which can max out the network throughput (115MB/s on
GbE).

> Sorry for all the questions and thanks for the

Not a problem, that's what the lists are for!

-Ross
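P.S. If you do blow away the fakeraid and rebuild with mdadm, a couple
of quick checks are worth running first. These are only a sketch and
the /dev/sdX names are examples, so match them to your actual disks
before running anything (and stop the old striped array with
mdadm --stop /dev/md0 if it is still assembled).

Raw sequential read off a single drive, which should land near the
60MB/s figure above (it only reads, so it's harmless):

# dd if=/dev/sdb of=/dev/null bs=1M count=1024

Check the drive's write cache really is enabled:

# hdparm -W /dev/sdb

Then either the 2 drive mirror:

# mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb1 /dev/sdc1

or all four spindles as RAID 10 if you want the extra space and
spindle count:

# mdadm --create /dev/md0 --level=10 --raid-devices=4 /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
# mke2fs -j /dev/md0
# mount -t ext3 /dev/md0 /mnt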