I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it.
It has the following spec:
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz, 4GB ECC memory, 4 x 250GB SATA hard disks running at 1.5Gb/s
Onboard RAID controller is enabled but at the moment I have used mdadm to configure the array.
RAID bus controller: Intel Corporation 82801 SATA RAID Controller
For a simple striped array I ran:
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mke2fs -j /dev/md0
# mount -t ext3 /dev/md0 /mnt
Attached are the results of 2 bonnie++ tests I made to test the performance:
# bonnie++ -s 256m -d /mnt -u 0 -r 0
and
# bonnie++ -s 1g -d /mnt -u 0 -r 0
I also tried 3 of the drives in a RAID 5 setup, which gave similar results.
Is it me or are the results poor?
Is this the best I can expect from the hardware or is something wrong?
I would appreciate any advice or possible tweaks I can make to the system to make the performance better.
The block I/O is the thing that concerns me as mostly I am serving a 650MB file via samba to 5 clients and I think this is where I need the speed.
Plus I am hoping to run some virtualised guests on it eventually, but nothing too heavy.
Version  1.94       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost      256M   512  98 74517  19 107776  19  1451  99 +++++ +++  3752  16
Latency             80784us    7458us      50us    6489us     201us    5670us
Version  1.94       ------Sequential Create------ --------Random Create--------
localhost           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 24866  67 +++++ +++ +++++ +++ 26588  73 +++++ +++ +++++ +++
Latency              9668us     139us     328us   13409us      14us     339us
1.93c,1.94,localhost,1,1231622612,256M,,512,98,74517,19,107776,19,1451,99,+++++,+++,3752,16,16,,,,,24866,67,+++++,+++,+++++,+++,26588,73,+++++,+++,+++++,+++,80784us,7458us,50us,6489us,201us,5670us,9668us,139us,328us,13409us,14us,339us
---
Version  1.94       ------Sequential Output------ --Sequential Input- --Random-
Concurrency   1     -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine        Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP  /sec %CP
localhost        1G   506  98 55430  13 53909   8  1431  98 1414238 97  1484   9
Latency             75827us     562ms   36504us   11904us     329us    5926us
Version  1.94       ------Sequential Create------ --------Random Create--------
localhost           -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
              files  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP  /sec %CP
                 16 21333  55 +++++ +++ +++++ +++ 25931  70 +++++ +++ +++++ +++
Latency             23775us     167us     173us   11528us      26us      97us
1.93c,1.94,localhost,1,1231623061,1G,,506,98,55430,13,53909,8,1431,98,1414238,97,1484,9,16,,,,,21333,55,+++++,+++,+++++,+++,25931,70,+++++,+++,+++++,+++,75827us,562ms,36504us,11904us,329us,5926us,23775us,167us,173us,11528us,26us,97us
Stewart Williams wrote:
I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it.
It has the following spec:
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz, 4GB ECC memory, 4 x 250GB SATA hard disks running at 1.5Gb/s
Onboard RAID controller is enabled but at the moment I have used mdadm to configure the array.
RAID bus controller: Intel Corporation 82801 SATA RAID Controller
That is essentially desktop-grade disk I/O.
For a simple striped array I ran:
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mke2fs -j /dev/md0
# mount -t ext3 /dev/md0 /mnt
Attached are the results of 2 bonnie++ tests I made to test the performance:
# bonnie++ -s 256m -d /mnt -u 0 -r 0
and
# bonnie++ -s 1g -d /mnt -u 0 -r 0
I also tried 3 of the drives in a RAID 5 setup, which gave similar results.
Is it me or are the results poor?
Is this the best I can expect from the hardware or is something wrong?
I would appreciate any advice or possible tweaks I can make to the system to make the performance better.
The block I/O is the thing that concerns me as mostly I am serving a 650MB file via samba to 5 clients and I think this is where I need the speed.
Is this a sequential- or random-access application that's using this file? Is it read-only/mostly, or is it random update?
It's rather hard to read your bonnie output logs as they aren't very columnar, but it appears the sequential read speed at least is really high.
I'm seeing 55MB/sec random (block) and 1.4GB/sec sequential reads on the 1GB file, so I don't know what your issues are... of course, a 1GB file sits entirely in the system cache, assuming a reasonable amount of otherwise idle memory.
John R Pierce wrote:
Stewart Williams wrote:
I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it.
It has the following spec:
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz, 4GB ECC memory, 4 x 250GB SATA hard disks running at 1.5Gb/s
Onboard RAID controller is enabled but at the moment I have used mdadm to configure the array.
RAID bus controller: Intel Corporation 82801 SATA RAID Controller
That is essentially desktop-grade disk I/O.
For a simple striped array I ran:
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mke2fs -j /dev/md0
# mount -t ext3 /dev/md0 /mnt
Attached are the results of 2 bonnie++ tests I made to test the performance:
# bonnie++ -s 256m -d /mnt -u 0 -r 0
and
# bonnie++ -s 1g -d /mnt -u 0 -r 0
I also tried 3 of the drives in a RAID 5 setup, which gave similar results.
Is it me or are the results poor?
Is this the best I can expect from the hardware or is something wrong?
I would appreciate any advice or possible tweaks I can make to the system to make the performance better.
The block I/O is the thing that concerns me as mostly I am serving a 650MB file via samba to 5 clients and I think this is where I need the speed.
Is this a sequential- or random-access application that's using this file? Is it read-only/mostly, or is it random update?
I'm not sure, how can I find this out?
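One crude way to find out without installing anything is to sample the kernel's disk counters while the clients are busy; this is only a sketch, and the md0 device name is an assumption (use whatever device backs the share):

```shell
# Sample /proc/diskstats twice, 5 seconds apart, and print reads/writes
# completed on the array in between.  A read-mostly count suggests the
# application streams the file; a high write count suggests random updates.
dev=md0
before=$(awk -v d="$dev" '$3==d {print $4, $8}' /proc/diskstats)
sleep 5
after=$(awk -v d="$dev" '$3==d {print $4, $8}' /proc/diskstats)
echo "reads/writes before: $before"
echo "reads/writes after:  $after"
```

If the sysstat package is available, `iostat -x 5` gives the same picture with average request sizes included.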
It's rather hard to read your bonnie output logs as they aren't very columnar, but it appears the sequential read speed at least is really high.
I'm seeing 55MB/sec random (block) and 1.4GB/sec sequential reads on the 1GB file,
Correct.
so I don't know what your issues are... of course, a 1GB file sits entirely in the system cache, assuming a reasonable amount of otherwise idle memory
I'm not sure whether the performance would suffice as I've not tried putting it in production.
I am going to benchmark the old server (currently in production) that this is replacing.
Thanks,
Stewart
Stewart Williams wrote:
John R Pierce wrote:
Stewart Williams wrote:
The block I/O is the thing that concerns me as mostly I am serving a 650MB file via samba to 5 clients and I think this is where I need the speed.
Is this a sequential- or random-access application that's using this file? Is it read-only/mostly, or is it random update?
I'm not sure, how can I find this out?
Well, what is the nature of the application that's using this file? Do you really mean just a single 650MB (sub-1GB) file? Is this something like a QuickBooks file? (That's kind of what it sounds like from your other answers.)
given what you have now, and what information we've been given, I would
A) disable BIOS RAID, configuring it for JBOD w/ AHCI enabled
B) mdadm mirror disks 0 and 1, and put the OS on that
C) mdadm mirror disks 2 and 3, and put your shared SMB filesystem on that
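Spelled out as commands, that layout might look like the following. This is only a sketch: the device names, partition numbers and mount point are assumptions, and mdadm --create is destructive, so check against your own system first.

```shell
# C) mirror disks 2 and 3 for the shared SMB filesystem
mdadm --create /dev/md1 --level=1 --raid-devices=2 /dev/sdc1 /dev/sdd1
mke2fs -j /dev/md1                 # ext3, as in the original test
mount -t ext3 /dev/md1 /srv/share
# (B, the OS mirror on disks 0 and 1, is normally set up in the installer)
```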
John R Pierce wrote:
Stewart Williams wrote:
John R Pierce wrote:
Stewart Williams wrote:
The block I/O is the thing that concerns me as mostly I am serving a 650MB file via samba to 5 clients and I think this is where I need the speed.
Is this a sequential- or random-access application that's using this file? Is it read-only/mostly, or is it random update?
I'm not sure, how can I find this out?
Well, what is the nature of the application that's using this file? Do you really mean just a single 650MB (sub-1GB) file? Is this something like a QuickBooks file? (That's kind of what it sounds like from your other answers.)
It is indeed a QuickBooks file. The file is currently at 645MB and grows about 20MB per week.
given what you have now, and what information we've been given, I would
A) disable BIOS RAID, configuring it for JBOD w/ AHCI enabled
B) mdadm mirror disks 0 and 1, and put the OS on that
C) mdadm mirror disks 2 and 3, and put your shared SMB filesystem on that
Is that a better choice than RAID 1+0?
Stewart Williams wrote:
John R Pierce wrote:
Stewart Williams wrote:
John R Pierce wrote:
Stewart Williams wrote:
The block I/O is the thing that concerns me as mostly I am serving a 650MB file via samba to 5 clients and I think this is where I need the speed.
Is this a sequential- or random-access application that's using this file? Is it read-only/mostly, or is it random update?
I'm not sure, how can I find this out?
Well, what is the nature of the application that's using this file? Do you really mean just a single 650MB (sub-1GB) file? Is this something like a QuickBooks file? (That's kind of what it sounds like from your other answers.)
It is indeed a QuickBooks file. The file is currently at 645MB and grows about 20MB per week.
given what you have now, and what information we've been given, I would
A) disable BIOS RAID, configuring it for JBOD w/ AHCI enabled
B) mdadm mirror disks 0 and 1, and put the OS on that
C) mdadm mirror disks 2 and 3, and put your shared SMB filesystem on that
Is that a better choice than RAID 1+0?
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Hi, I too run QuickBooks (2007) and offer the following scenario. We have 5 user licences (actually 2 three-user packages were purchased). Previously I used version 2004, which allows much better sharing of the data file; unfortunately I got sucked into an upgrade that in reality was a significant downgrade!!

Anyway, my set-up: for multiple simultaneous users, one machine has to be the de facto "server", i.e. it opens the file and shares access to the underlying data store on behalf of other users (why they can't develop a decent client-server product defies understanding). So what I have done is establish a W2K client running in VirtualBox (thanks to Sun for keeping this product FOSS). This client accesses the data file from my main server (running a HW-based RAID 5 disk array). I have lots of RAM on my VirtualBox client, and allocate sufficient to ensure all is in RAM. Thus far the system has been very robust with no data loss. I keep this client running in share mode 24x7 and only go single-user to create backups. Unfortunately QuickBooks does not provide an automated method of backing up (another gross oversight).
All the other users (on Windoze XP at this time) access the VirtualBox W2K machine for the data file. Performance, while not stellar, is adequate; my file is only 10% the size of yours, but it runs basically from RAM and only writes on save.... Apparently QuickBooks does offer more expensive products that may work better from a client-server perspective, but only on Windoze and Mac, and for my small business the cost is WAY TOO HIGH and I love FOSS and Linux.
I must say, it took me many dozens of hours to get this working properly (mostly due to my ignorance, and QuickBooks' poor design), so I hope this may assist you.
Hi Rob,
Rob Kampen wrote:
Hi, I too run QuickBooks (2007) and offer the following scenario. We have 5 user licences (actually 2 three-user packages were purchased). Previously I used version 2004, which allows much better sharing of the data file; unfortunately I got sucked into an upgrade that in reality was a significant downgrade!!
Yeah, I know exactly what you mean. We are currently on 2005 Pro and got "sucked into" upgrading from 2003 Pro, which was working fine for us; but 2005 did have a couple of features we liked the sound of. Little did we know that 2005 was a zillion times slower than 2003 (in our experience), and once you convert your data file and work with it for a week or so, adding information, there is no way back; the data you've added since is too precious to lose, so you can't afford to revert to a week-old backup. If only the files were version independent.
Anyway, my set-up: for multiple simultaneous users, one machine has to be the de facto "server", i.e. it opens the file and shares access to the underlying data store on behalf of other users (why they can't develop a decent client-server product defies understanding).
I've always been annoyed by this too, as it has never really been made into a proper networkable application. They also say that the company file should never reach a size greater than 125MB. Ha! No chance.
So what I have done is establish a W2K client running in VirtualBox (thanks to Sun for keeping this product FOSS). This client accesses the data file from my main server (running a HW-based RAID 5 disk array). I have lots of RAM on my VirtualBox client, and allocate sufficient to ensure all is in RAM. Thus far the system has been very robust with no data loss. I keep this client running in share mode 24x7 and only go single-user to create backups. Unfortunately QuickBooks does not provide an automated method of backing up (another gross oversight).
I can't really see the benefit in this; Samba shares the company file just as well as Windows, as long as you configure the permissions correctly and set the oplock settings in smb.conf.
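For what it's worth, the smb.conf share section I mean looks roughly like this; the share name and path are made up for illustration, and disabling oplocks is a common (not the only) way to handle a data file that several clients hold open at once:

```
[quickbooks]
   path = /srv/share/quickbooks
   writable = yes
   ; several clients open the data file simultaneously, so stop
   ; clients from caching it locally via opportunistic locks
   oplocks = no
   level2 oplocks = no
```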
That said, I have read, when spending endless hours googling for tips on running QB from a server, that it can be best to serve it from a Windows box, and Intuit only supports that method.
All the other users (on Windoze XP at this time) access the VirtualBox W2K machine for the data file. Performance, while not stellar, is adequate; my file is only 10% the size of yours, but it runs basically from RAM and only writes on save.... Apparently QuickBooks does offer more expensive products that may work better from a client-server perspective, but only on Windoze and Mac, and for my small business the cost is WAY TOO HIGH and I love FOSS and Linux.
I think the performance difference would be far worse in this configuration with the size of our file.
I must say, it took me many dozens of hours to get this working properly (mostly due to my ignorance, and QuickBooks' poor design), so I hope this may assist you.
Thanks for your reply Rob.
It's a shame that there are these issues, as it's an excellent program and suits our needs perfectly in every other way than that mentioned.
And like you, I'd rather use FOSS and GNU/Linux. And that's not through cost, but through choice!
Stewart
Hi Stewart
Stewart Williams wrote:
Hi Rob,
Rob Kampen wrote:
Hi, I too run QuickBooks (2007) and offer the following scenario. We have 5 user licences (actually 2 three-user packages were purchased). Previously I used version 2004, which allows much better sharing of the data file; unfortunately I got sucked into an upgrade that in reality was a significant downgrade!!
Yeah, I know exactly what you mean. We are currently on 2005 Pro and got "sucked into" upgrading from 2003 Pro, which was working fine for us; but 2005 did have a couple of features we liked the sound of. Little did we know that 2005 was a zillion times slower than 2003 (in our experience), and once you convert your data file and work with it for a week or so, adding information, there is no way back; the data you've added since is too precious to lose, so you can't afford to revert to a week-old backup. If only the files were version independent.
You too? Sounds exactly like what happened to me.
Anyway, my set-up: for multiple simultaneous users, one machine has to be the de facto "server", i.e. it opens the file and shares access to the underlying data store on behalf of other users (why they can't develop a decent client-server product defies understanding).
I've always been annoyed by this too, as it has never really been made into a proper networkable application. They also say that the company file should never reach a size greater than 125MB. Ha! No chance.
So what I have done is establish a W2K client running in VirtualBox (thanks to Sun for keeping this product FOSS). This client accesses the data file from my main server (running a HW-based RAID 5 disk array). I have lots of RAM on my VirtualBox client, and allocate sufficient to ensure all is in RAM. Thus far the system has been very robust with no data loss. I keep this client running in share mode 24x7 and only go single-user to create backups. Unfortunately QuickBooks does not provide an automated method of backing up (another gross oversight).
I can't really see the benefit in this; Samba shares the company file just as well as Windows, as long as you configure the permissions correctly and set the oplock settings in smb.conf.
That said, I have read, when spending endless hours googling for tips on running QB from a server, that it can be best to serve it from a Windows box, and Intuit only supports that method.
All the other users (on Windoze XP at this time) access the VirtualBox W2K machine for the data file. Performance, while not stellar, is adequate; my file is only 10% the size of yours, but it runs basically from RAM and only writes on save.... Apparently QuickBooks does offer more expensive products that may work better from a client-server perspective, but only on Windoze and Mac, and for my small business the cost is WAY TOO HIGH and I love FOSS and Linux.
I think the performance difference would be far worse in this configuration with the size of our file.
I must say, it took me many dozens of hours to get this working properly (mostly due to my ignorance, and QuickBooks' poor design), so I hope this may assist you.
Thanks for your reply Rob.
It's a shame that there are these issues, as it's an excellent program and suits our needs perfectly in every other way than that mentioned.
And like you, I'd rather use FOSS and GNU/Linux. And that's not through cost, but through choice!
Stewart
I too used the samba share method with QB2004, as this allowed any of the machines to access the file in multi-user mode; what's more, I could do automatic backups of the samba file system each night and have a working backup. However, QB2007 does not allow this at all: only a single user could open the file and the other users had to access it via that machine, hence the elaborate scheme I now use. My memory fails me as to all the convoluted things I tried to get it working like it did under QB2004; I was not fit to live with for about a week when this hit around Christmas 2006.

QB does my payroll as well, and while not the cheapest, it just works. I only need and use the basic functions of QB. Just be very careful if you upgrade again; I got sucked in as I use TurboTax and they only allow data transfer from QB <= three years old. From now on I'll just manually do the TurboTax data input.... learned the hard way.

I was MD of a UK-based software development company, also spent many years in a large corporation heading IT strategy, and have seen the issues of proprietary software vs FOSS; there's no going back for me. Hope you get something that works OK for your situation. I note that at 20MB per week growth, you will probably have other issues heading your way real soon.
John R Pierce wrote:
Stewart Williams wrote:
John R Pierce wrote:
Stewart Williams wrote:
The block I/O is the thing that concerns me as mostly I am serving a 650MB file via samba to 5 clients and I think this is where I need the speed.
Is this a sequential- or random-access application that's using this file? Is it read-only/mostly, or is it random update?
I'm not sure, how can I find this out?
Well, what is the nature of the application that's using this file? Do you really mean just a single 650MB (sub-1GB) file? Is this something like a QuickBooks file? (That's kind of what it sounds like from your other answers.)
given what you have now, and what information we've been given, I would
A) disable BIOS RAID, configuring it for JBOD w/ AHCI enabled
B) mdadm mirror disks 0 and 1, and put the OS on that
C) mdadm mirror disks 2 and 3, and put your shared SMB filesystem on that
I disabled RAID in the BIOS (with SATA native mode set to auto), and the CentOS install runs fine to start with; then I set up my RAID configuration in disk-druid, continue, and the system virtually grinds to a halt when it gets to the formatting-filesystem stage: the progress bar goes up a little, then stops. The mouse cursor moves and the disk light flashes frequently, but nothing is happening (even after hours). I can't switch to a debugging VT as the system is so unresponsive.
However, if I enable the RAID in the BIOS again, and proceed with the above, the install finishes fine.
Upon POST it says 4 drives JBOD.
The only thing that concerns me is whether the drives are running at best speeds and not in PIO mode, as William stated. I assume that as I have created the array with mdadm and not HP's config utility this is not the case, but I just want some clarification.
CentOS loads the ahci driver and it looks OK to me. Here is my dmesg output:
scsi0 : ahci
scsi1 : ahci
scsi2 : ahci
scsi3 : ahci
ata1: SATA max UDMA/133 abar m2048@0xec000800 port 0xec000900 irq 233
ata2: SATA max UDMA/133 abar m2048@0xec000800 port 0xec000980 irq 233
ata3: SATA max UDMA/133 abar m2048@0xec000800 port 0xec000a00 irq 233
ata4: SATA max UDMA/133 abar m2048@0xec000800 port 0xec000a80 irq 233
ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata1.00: ATA-7: GB0250C8045, HPG1, max UDMA/133
ata1.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32)
ata1.00: configured for UDMA/133
ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata2.00: ATA-7: MAXTOR STM3250310AS, 4.AAA, max UDMA/133
ata2.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32)
ata2.00: configured for UDMA/133
ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata3.00: ATA-7: MAXTOR STM3250310AS, 3.AAF, max UDMA/133
ata3.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32)
ata3.00: configured for UDMA/133
ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata4.00: ATA-7: MAXTOR STM3250310AS, 4.AAA, max UDMA/133
ata4.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32)
ata4.00: configured for UDMA/133
Also, it shows 2 of the ports are running at 3.0Gbps (which is strange, as I thought all of the ports on the mainboard were the same 1.5Gbps speed).
The 3 Maxtor drives were purchased as SATA2 spec.
Is there a way to tell which ata?.00 corresponds to which of sd[a-d], so that I can RAID the 2 faster drives together? Or does the kernel assign them in order (e.g. ata2.00 = sdb)? Or won't it make much difference?
Thanks
Stewart
Stewart Williams wrote: ...
ata1: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata1.00: ATA-7: GB0250C8045, HPG1, max UDMA/133
ata1.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32)
ata1.00: configured for UDMA/133
ata2: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata2.00: ATA-7: MAXTOR STM3250310AS, 4.AAA, max UDMA/133
ata2.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32)
ata2.00: configured for UDMA/133
ata3: SATA link up 1.5 Gbps (SStatus 113 SControl 300)
ata3.00: ATA-7: MAXTOR STM3250310AS, 3.AAF, max UDMA/133
ata3.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32)
ata3.00: configured for UDMA/133
ata4: SATA link up 3.0 Gbps (SStatus 123 SControl 300)
ata4.00: ATA-7: MAXTOR STM3250310AS, 4.AAA, max UDMA/133
ata4.00: 488397168 sectors, multi 16: LBA48 NCQ (depth 31/32)
ata4.00: configured for UDMA/133
Also, it shows 2 of the ports are running at 3.0Gbps (which is strange, as I thought all of the ports on the mainboard were the same 1.5Gbps speed).
The 3 Maxtor drives were purchased as SATA2 spec.
Is there a way to tell what ata?.00 corresponds to sd[a-d]? So that I
You can identify them easily using e.g. "smartctl -a -d ata /dev/sda" or "hdparm -i /dev/sda". Those commands return the model and firmware revision, which you can then match against ata1-ata4.
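A small loop saves typing that per drive; a sketch only, assuming smartmontools is installed and the disks are sda through sdd:

```shell
# Print model and firmware for each disk, to match against the
# ata1..ata4 lines in the dmesg output above.
for d in /dev/sd[a-d]; do
    echo "== $d =="
    smartctl -i -d ata "$d" | grep -E 'Device Model|Firmware Version'
done
```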
can RAID the 2 faster drives together. Or does the kernel assign in order (e.g. ata2.00 = sdb)? Or won't it make much difference?
It won't make much difference as long as you put two of the three STM3250310AS drives together (preferably those with the presumably newer 4.AAA firmware). You may be able to switch the two 1.5Gbps drives into 3.0Gbps mode using vendor tools, but individual drive speeds are below 1.5Gbps anyway.
HTH,
Kay
Stewart Williams wrote:
John R Pierce wrote:
Stewart Williams wrote:
I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it.
It has the following spec:
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz, 4GB ECC memory, 4 x 250GB SATA hard disks running at 1.5Gb/s
Onboard RAID controller is enabled but at the moment I have used mdadm to configure the array.
RAID bus controller: Intel Corporation 82801 SATA RAID Controller
That is essentially desktop-grade disk I/O.
For a simple striped array I ran:
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mke2fs -j /dev/md0
# mount -t ext3 /dev/md0 /mnt
Attached are the results of 2 bonnie++ tests I made to test the performance:
# bonnie++ -s 256m -d /mnt -u 0 -r 0
and
# bonnie++ -s 1g -d /mnt -u 0 -r 0
I also tried 3 of the drives in a RAID 5 setup, which gave similar results.
Is it me or are the results poor?
Is this the best I can expect from the hardware or is something wrong?
I would appreciate any advice or possible tweaks I can make to the system to make the performance better.
The block I/O is the thing that concerns me as mostly I am serving a 650MB file via samba to 5 clients and I think this is where I need the speed.
Is this a sequential- or random-access application that's using this file? Is it read-only/mostly, or is it random update?
I'm not sure, how can I find this out?
It's rather hard to read your bonnie output logs as they aren't very columnar, but it appears the sequential read speed at least is really high.
I'm seeing 55MB/sec random (block) and 1.4GB/sec sequential reads on the 1GB file,
Correct.
so I don't know what your issues are... of course, a 1GB file sits entirely in the system cache, assuming a reasonable amount of otherwise idle memory
I'm not sure whether the performance would suffice as I've not tried putting it in production.
I am going to benchmark the old server (currently in production) that this is replacing.
Thanks,
Stewart
I've run the same bonnie++ test on my old server using a 1GB file.
The machine has only 1GB RAM and 2 IDE ATA100 hard disks in a RAID 1 mirror on separate IDE channels on the motherboard.
I got about 38MB/s write w/ 13% CPU and 80MB/s read w/ 97% CPU. Also, if I watch `top` with only one user and run a quick-report in QuickBooks on a stock item, the iowait is about 50% and the cached RAM fills to around 200-300MB. So with 5 users I expect that to go up quite dramatically.
On Sat, 10 Jan 2009 at 10:46pm, Stewart Williams wrote
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz 4GB ECC memory
To actually test disk performance, you need to use a filesize of at least 2X (and preferably 4X) memory size. Otherwise you're just testing memory performance.
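Following that rule of thumb, the test size can be derived from the machine's RAM instead of hard-coded; a minimal sketch (the /mnt target directory is an assumption):

```shell
# Pick a bonnie++ file size of 2x physical RAM so the page cache
# cannot hold the whole working set (reads /proc/meminfo).
mem_kb=$(awk '/^MemTotal:/ {print $2}' /proc/meminfo)
size_mb=$(( mem_kb * 2 / 1024 ))
echo "bonnie++ -s ${size_mb}m -d /mnt -u 0 -r 0"
```

On the 4GB machine above this suggests roughly an 8GB test file, matching the "2X to 4X memory" advice.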
Stewart Williams wrote:
I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it.
It has the following spec:
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz, 4GB ECC memory, 4 x 250GB SATA hard disks running at 1.5Gb/s
Onboard RAID controller is enabled but at the moment I have used mdadm to configure the array.
RAID bus controller: Intel Corporation 82801 SATA RAID Controller
For a simple striped array I ran:
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mke2fs -j /dev/md0
# mount -t ext3 /dev/md0 /mnt
Attached are the results of 2 bonnie++ tests I made to test the performance:
# bonnie++ -s 256m -d /mnt -u 0 -r 0
and
# bonnie++ -s 1g -d /mnt -u 0 -r 0
I also tried 3 of the drives in a RAID 5 setup, which gave similar results.
Is it me or are the results poor?
Is this the best I can expect from the hardware or is something wrong?
I would appreciate any advice or possible tweaks I can make to the system to make the performance better.
The block I/O is the thing that concerns me as mostly I am serving a 650MB file via samba to 5 clients and I think this is where I need the speed.
Plus I am hoping to run some virtualised guests on it eventually, but nothing too heavy.
That onboard RAID is fakeraid, so when you dial up RAID 5 you effectively put the HDDs in PIO mode, since ALL data has to be routed through your CPU. Please get a RAID card from HP, or go get a 3ware card, so you have real hardware RAID.
Fake and real RAID chipsets: http://linuxmafia.com/faq/Hardware/sata.html
Why using fakeraid at all is bad: http://thebs413.blogspot.com/2005/09/fake-raid-fraid-sucks-even-more-at.html
MD under Linux is kernel RAID that does not use a binary driver; however, you don't want to do ANY software RAID 5.
William Warren wrote:
Stewart Williams wrote:
I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it.
It has the following spec:
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz, 4GB ECC memory, 4 x 250GB SATA hard disks running at 1.5Gb/s
Onboard RAID controller is enabled but at the moment I have used mdadm to configure the array.
RAID bus controller: Intel Corporation 82801 SATA RAID Controller
For a simple striped array I ran:
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 /dev/sdc1
# mke2fs -j /dev/md0
# mount -t ext3 /dev/md0 /mnt
Attached are the results of 2 bonnie++ tests I made to test the performance:
# bonnie++ -s 256m -d /mnt -u 0 -r 0
and
# bonnie++ -s 1g -d /mnt -u 0 -r 0
I also tried 3 of the drives in a RAID 5 setup, which gave similar results.
Is it me or are the results poor?
Is this the best I can expect from the hardware or is something wrong?
I would appreciate any advice or possible tweaks I can make to the system to make the performance better.
The block I/O is the thing that concerns me as mostly I am serving a 650MB file via samba to 5 clients and I think this is where I need the speed.
Plus I am hoping to run some virtualised guests on it eventually, but nothing too heavy.
That onboard RAID is fakeraid, so when you dial up RAID 5 you effectively put the HDDs in PIO mode, since ALL data has to be routed through your CPU. Please get a RAID card from HP, or go get a 3ware card, so you have real hardware RAID.
Fake and real RAID chipsets: http://linuxmafia.com/faq/Hardware/sata.html
Why using fakeraid at all is bad: http://thebs413.blogspot.com/2005/09/fake-raid-fraid-sucks-even-more-at.html
MD under Linux is kernel RAID that does not use a binary driver; however, you don't want to do ANY software RAID 5.
Thanks William,
I am no expert on RAID, so you have opened my eyes to some things I wasn't aware of.
I am considering disabling the onboard RAID in the BIOS and re-installing CentOS and configuring the 4 drives as RAID 10 just to see what the performance is like.
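If it helps, the four-disk RAID 10 can also be created directly with mdadm rather than through the installer; a sketch only (the partition names are assumptions, and the command destroys their contents):

```shell
# Striped mirrors across all four disks ('near' layout is mdadm's default).
mdadm --create /dev/md0 --level=10 --raid-devices=4 \
      /dev/sda1 /dev/sdb1 /dev/sdc1 /dev/sdd1
mke2fs -j /dev/md0                # ext3, as in the earlier tests
```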
Or I may purchase a card as you advise. Would I benefit from buying a SCSI or SAS card and drives for my requirements? Basically the main role of the machine is to serve a ~600MB file via samba to 5 Windows XP client PCs on a gigabit network.
Stewart
On Jan 11, 2009, at 10:13 AM, Stewart Williams lists@pinkyboots.co.uk wrote:
William Warren wrote:
Stewart Williams wrote:
I have just purchased an HP ProLiant ML110 G5 server and installed CentOS 5.2 x86_64 on it.
It has the following spec:
Intel(R) Xeon(R) CPU 3065 @ 2.33GHz, 4GB ECC memory, 4 x 250GB SATA hard disks running at 1.5Gb/s
Onboard RAID controller is enabled but at the moment I have used mdadm to configure the array.
RAID bus controller: Intel Corporation 82801 SATA RAID Controller
For a simple striped array I ran:
# mdadm --create /dev/md0 --level=0 --raid-devices=2 /dev/sdb1 / dev/sdc1 # mke2fs -j /dev/md0 # mount -t ext3 /dev/md0 /mnt
Attached are the results of 2 bonnie++ tests I made to test the performance:
# bonnie++ -s 256m -d /mnt -u 0 -r 0
and
# bonnie++ -s 1g -d /mnt -u 0 -r 0
I also tried 3 of the drives in a RAID 5 setup with gave similar results.
Is it me or are the results poor?
Is this the best I can expect from the hardware or is something wrong?
I would appreciate any advice or possible tweaks I can make to the system to make the performance better.
The block I/O is the thing that concerns me as mostly I am serving a 650MB file via samba to 5 clients and I think this is where I need the speed.
Plus I am hoping to run some virtualised guests on it eventually, but nothing too heavy.
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
That onbard raid is fakeraid..so when you dialup raid 5 you effectivly put hte hdd's in pio mode since ALL data has to be routed through your cpu. Please get a raid card from HP or go get a 3ware card so you ahve real hardware raid.
fake and real raid chpsets: http://linuxmafia.com/faq/Hardware/sata.html
Why using fakeraid at all is bad: http://thebs413.blogspot.com/2005/09/fake-raid-fraid-sucks-even-more-at.html
MDM under linux is kernel raid that does not use a binary driver..however you don't want to do ANY software raid 5.
Thanks William,
I am no expert on RAID, so you have opened my eyes to somethings I wasn't aware of.
I am considering disabling the onboard RAID in the BIOS and re-installing CentOS and configuring the 4 drives as RAID 10 just to see what the performance is like.
Or I may purchase a card as you advise. Would I benefit from buying a SCSI or SAS card and drives for my requirements? Basically the main role of the machine is to serve a ~600MB file via Samba to 5 Windows XP client PCs on a gigabit network.
If all you're doing is serving a single file to a handful of PCs, then a 2-drive mirror will be more than enough.
You should stick with the OS RAID though as the onboard RAID will bring nothing but pain.
For sequential IO expect 60MB/s read and 40MB/s write (with the drive's write cache enabled) per drive. Random IO is an order of magnitude less.
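A quick way to sanity-check sequential write throughput against those figures, outside of bonnie++, is a dd run that forces data to disk (a sketch; the temp-file location and 64MB size are arbitrary choices):

```shell
# Rough sequential-write check: write 64MB and flush it before dd reports.
# conv=fdatasync makes dd sync at the end, so the throughput figure
# reflects the disk rather than just the page cache.
TESTFILE=$(mktemp /tmp/ddtest.XXXXXX)
dd if=/dev/zero of="$TESTFILE" bs=1M count=64 conv=fdatasync
rm -f "$TESTFILE"
```

dd prints the MB/s figure on stderr when it finishes; run it against the mounted array (e.g. a file under /mnt) to test the md device rather than the system disk.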
-Ross
On 11.01.2009 at 18:36, Ross Walker wrote:
If all you're doing is serving a single file to a handful of PCs, then a 2-drive mirror will be more than enough.
Actually, he could put it on a swap-backed tmpfs and serve it directly from RAM.
Seeing that he has 4GB of it...
If it's really only a single file, he could serve it via NGINX ;-)
cheers, Rainer
On Jan 11, 2009, at 12:49 PM, Rainer Duffner rainer@ultra-secure.de wrote:
On 11.01.2009 at 18:36, Ross Walker wrote:
If all you're doing is serving a single file to a handful of PCs, then a 2-drive mirror will be more than enough.
Actually, he could put it on a swap-backed tmpfs and serve it directly from RAM.
That would end badly if the OS crashed...
For reads, the whole file would end up in page cache right away anyway, and would get re-cached right after a write.
Seeing that he has 4GB of it...
If it's really only a single file, he could serve it via NGINX ;-)
NGINX? I'll have to google that one...
-Ross
Ross Walker wrote:
If all you're doing is serving a single file to a handful of PCs, then a 2-drive mirror will be more than enough.
That is what I currently have set up on the old server, but it only has 1GB RAM and an AMD Duron 1300MHz CPU.
The performance on the clients gets slower as the file size grows, and it has now become very slow - hence the new server.
You should stick with the OS RAID though as the onboard RAID will bring nothing but pain.
That is what I have read. So understood :-)
For sequential IO expect 60MB/s read and 40MB/s write (with the drive's write cache enabled) per drive. Random IO is an order of magnitude less.
Should that be OK for my needs, or should I be wanting more for the clients to be happy? What figure should I be looking at?
-Ross
Sorry for all the questions and thanks for the help.
Stewart
On Jan 11, 2009, at 1:06 PM, Stewart Williams lists@pinkyboots.co.uk wrote:
If all you're doing is serving a single file to a handful of PCs, then a 2-drive mirror will be more than enough.
That is what I currently have set up on the old server, but it only has 1GB RAM and an AMD Duron 1300MHz CPU.
The performance on the clients gets slower as the file size grows, and it has now become very slow - hence the new server.
Sounds like the file is getting more and more fragmented, and the I/O over it is turning into random I/O.
Once a week, disable access to the file, copy it to a new name, then move it back over the top of the old one; that'll defrag it.
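The copy-then-rename trick can be sketched like this, demonstrated here on a scratch file (in practice FILE would be the shared file on the array, with clients disconnected first):

```shell
# Weekly defrag trick: a fresh sequential copy, renamed over the original.
FILE=$(mktemp)                    # scratch stand-in for the real shared file
printf 'shared data' > "$FILE"
cp -p "$FILE" "$FILE.defrag"      # sequential copy allocates new, mostly contiguous blocks
mv "$FILE.defrag" "$FILE"         # rename is atomic, so nothing ever sees a partial file
cat "$FILE"
```

cp -p preserves ownership and timestamps, so Samba clients see the same file attributes afterwards.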
You should stick with the OS RAID though as the onboard RAID will bring nothing but pain.
That is what I have read. So understood :-)
For sequential IO expect 60MB/s read and 40MB/s write (with the drive's write cache enabled) per drive. Random IO is an order of magnitude less.
Should that be OK for my needs, or should I be wanting more for the clients to be happy? What figure should I be looking at?
That's what to expect with standard file I/O operations (4k); some apps use larger I/Os, so they will get better throughput (backups 64k, video editing 128k+), which can max out the network throughput (~115MB/s on GbE).
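As a sanity check on that ~115MB/s figure (a rough sketch; the ~93% payload efficiency used here is an assumed ballpark for TCP/IP over Ethernet with standard 1500-byte frames):

```shell
# 1 Gbit/s = 125 MB/s raw; roughly 93% of that survives
# Ethernet/IP/TCP framing overhead (assumed ballpark figure).
raw=$((1000000000 / 8 / 1000000))
usable=$((raw * 93 / 100))
echo "raw: ${raw} MB/s, usable: ~${usable} MB/s"
```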
Sorry for all the questions and thanks for the help.
Not a problem, that's what the lists are for!
-Ross
Stewart Williams wrote:
I am no expert on RAID, so you have opened my eyes to somethings I wasn't aware of.
I am considering disabling the onboard RAID in the BIOS and re-installing CentOS and configuring the 4 drives as RAID 10 just to see what the performance is like.
Yes, unless you have an expensive hardware RAID with its own CPU and buffers on board, software RAID 1 or 10 is better. I like to stick to RAID 1 where practical size-wise, so that you can recover data from any single disk and keep seeks on different filesystems from competing with each other.
Or I may purchase a card as you advise. Would I benefit from buying a SCSI or SAS card and drives for my requirements? Basically the main role of the machine is to serve a ~600MB file via Samba to 5 Windows XP client PCs on a gigabit network.
After the first read, that should live entirely in RAM cache, and the speed you can serve it at won't be limited by the underlying disk, except for writes that eventually have to flush back. At least until you consume the RAM doing something else.