I have two identical servers. The only difference is that the first one has Maxtor 250G drives and the second one has Seagate 320G drives.
OS: CentOS-4.4 (fully patched)
CPU: dual Opteron 280
Memory: 16GB
RAID card: 3ware 9550SX-8LP
RAID volume: 4-disk RAID 5 with NCQ and Write Cache enabled
On the first server I have decent performance. Nothing spectacular, but good enough. The second one has about 1/3 the write speed. I can't find any difference between the systems. Both of them have the same stripe size, both have ext3 filesystems, both have write caching and NCQ turned on. I have already increased the read ahead setting to 16384 on both servers.
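(For anyone wanting to reproduce the read-ahead change: one way to set it is with blockdev. This is just a sketch; /dev/sda is a placeholder for whatever device node the 3ware unit shows up as, and the value is in 512-byte sectors.)

# blockdev --setra 16384 /dev/sda    (set the device read-ahead)
# blockdev --getra /dev/sda          (verify the current value)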
I ran the tests like this:
# sync; bonnie++ -d /iotest -s 50g -n 0 -b -f
(I have removed some extra information from the reports for brevity.)
And here are the results for the two servers:
                 ------Output-------  --Input--
                 --Block-- -Rewrite-  --Block--  --Seeks--
Machine    Size  K/sec %CP K/sec %CP  K/sec %CP   /sec %CP
First       50G  62893  25 46763  12  160672 19  120.6   1
Second      50G  18835   7 44025  12  194719 24  122.8   1
As you can see, the write performance of the second server is terrible. Anyone have any suggestions of what I can look for? I keep thinking there must be something I tweaked on the first server that I forgot about for the second one, but so far I haven't been able to find it.
Any suggestions appreciated!
On Thu, 12 Oct 2006 at 4:22pm, Bowie Bailey wrote
On the first server I have decent performance. Nothing spectacular, but good enough. The second one has about 1/3 the write speed. I can't find any difference between the systems. Both of them have the same stripe size, both have ext3 filesystems, both have write caching and NCQ turned on. I have already increased the read ahead setting to 16384 on both servers.
Turn off NCQ. Last I knew, this was still 3ware's recommendation.
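(For reference, queueing can be toggled per unit from 3ware's CLI. A sketch only; the controller and unit numbers are placeholders, so check "tw_cli show" for yours.)

# tw_cli /c0/u0 show qpolicy     (current queue policy for unit 0 on controller 0)
# tw_cli /c0/u0 set qpolicy=off  (disable NCQ on that unit)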
I saw a definite improvement by turning off NCQ and setting StorSave to 'Balanced.' Are these 1.5Gb/s or 3.0Gb/s SATA drives? During my testing I changed from non-interleaved memory and 1.5Gb/s to interleaved memory and 3.0Gb/s. It made a big difference in bonnie++ results. Unfortunately, I can't say which change was more important.
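(StorSave is likewise set per unit through tw_cli; a sketch, assuming the usual policy keywords protect/balance/perform and placeholder controller/unit numbers.)

# tw_cli /c0/u0 set storsave=balance   (the 'Balanced' setting mentioned above)
# tw_cli /c0/u0 show storsave          (confirm it took)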
If you have the patience, read through my recent (but lengthy) thread on the 3Ware 9550 titled "Calling All FS Fanatics." There's a lot of good info from many helpful people. I've only gotten full performance using JFS or XFS.
Kirk Bocek
Kirk Bocek wrote:
I saw a definite improvement by turning off NCQ and setting StorSave to 'Balanced.' Are these 1.5Gb/s or 3.0Gb/s SATA drives? During my testing I changed from non-interleaved memory and 1.5Gb/s to interleaved memory and 3.0Gb/s. It made a big difference in bonnie++ results. Unfortunately, I can't say which change was more important.
If you have the patience, read through my recent (but lengthy) thread on the 3Ware 9550 titled "Calling All FS Fanatics." There's a lot of good info from many helpful people. I've only gotten full performance using JFS or XFS.
I'm doing this from memory as the machine is at another location now. I think I did this:
- turned off NCQ (per Josh's suggestion)
- turned on write caching (it's on an oversized UPS and the data isn't critical)
- set StorSave to "performance"
- changed the memory interleave (thanks to Kirk's suggestion); it was off by default
- used parted to create a GPT disklabel
- set noatime and one other option that was suggested here for the RAID partition
- used mke2fs -j -b 4096 /dev/blah
That was it.
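(Roughly, as commands, in case it helps anyone following along. This is only a sketch: /dev/sdb and /data are placeholders, mkpart arguments vary between parted versions, and the NCQ/write cache/StorSave changes go through tw_cli as in the earlier sketches rather than these commands.)

# parted /dev/sdb mklabel gpt            (GPT disklabel on the array)
# parted /dev/sdb mkpart primary 0 -1    (one partition spanning the device; syntax varies by parted version)
# mke2fs -j -b 4096 /dev/sdb1            (ext3 with 4K blocks, as above)
# mount -o noatime /dev/sdb1 /data       (noatime on the RAID partition)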
Also, I tried the same array as a RAID0 device on the same box and the performance was approximately the same (maybe ever so slightly faster).
Cheers,
chrism@imntv.com wrote:
I'm doing this from memory as the machine is at another location now. I think I did this:
- turned off NCQ (per Josh's suggestion)
- turned on write caching (it's on an oversized UPS and the data isn't critical)
- set StorSave to "performance"
- changed the memory interleave (thanks to Kirk's suggestion); it was off by default
- used parted to create a GPT disklabel
- set noatime and one other option that was suggested here for the RAID partition
- used mke2fs -j -b 4096 /dev/blah
That was it.
Okay, now try installing kernel-module-xfs and xfsutils (located in the centosplus repository), run mkfs.xfs on some unused space, mount and re-run bonnie++.
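(Roughly, and hedged from memory: the userspace tools are packaged as xfsprogs on CentOS, the centosplus kernel module package is versioned per kernel, and /dev/sdb2 and the mount point below are placeholders for whatever unused space gets tested.)

# yum --enablerepo=centosplus install kernel-module-xfs-$(uname -r) xfsprogs
# modprobe xfs
# mkfs.xfs /dev/sdb2
# mount /dev/sdb2 /mnt/xfstest
# bonnie++ -d /mnt/xfstest -s 50g -n 0 -b -f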
Kirk Bocek wrote:
Okay, now try installing kernel-module-xfs and xfsutils (located in the centosplus repository), run mkfs.xfs on some unused space, mount and re-run bonnie++.
To what end? I'd absolutely NEVER use it in production that way. That's *plenty* fast for me.
Cheers,
chrism@imntv.com wrote:
To what end? I'd absolutely NEVER use it in production that way. That's *plenty* fast for me.
On the current system I built, the best I could get from ext3 on a 4-drive raid-5 array was 95MB/Sec writes. With XFS I'm getting 220MB/Sec+. For a media server, I want the extra speed.
Kirk Bocek wrote:
On the current system I built, the best I could get from ext3 on a 4-drive raid-5 array was 95MB/Sec writes. With XFS I'm getting 220MB/Sec+. For a media server, I want the extra speed.
Well, you saw my results. I'm happy with the speed and this is a server for fondling uncompressed video. So while I want to maximize the speed, I'm not going to take "unnecessarily foolish" risks with a filesystem that (at least to me) doesn't seem like it's quite reliable enough.
Seems like the answer to your problem is just to buy 4 more disks. :)
Cheers,
chrism@imntv.com wrote:
Well, you saw my results. I'm happy with the speed and this is a server for fondling uncompressed video. So while I want to maximize the speed, I'm not going to take "unnecessarily foolish" risks with a filesystem that (at least to me) doesn't seem like it's quite reliable enough. Seems like the answer to your problem is just to buy 4 more disks. :)
I guess I misunderstood your posts. I thought you were asking how to optimize write speeds. My bad.
If you have the patience, read through my recent (but lengthy) thread on the 3Ware 9550 titled "Calling All FS Fanatics." There's a lot of good info from many helpful people. I've only gotten full performance using JFS or XFS.
Dear god man! War and Peace is a shorter read if he's starting from scratch.....
Jim Perrin wrote:
Dear god man! War and Peace is a shorter read if he's starting from scratch.....
aw come on! Don't make it sound that bad please!
Feizhou wrote:
aw come on! Don't make it sound that bad please!
Well, if it quacks like a duck....
:)
BEDEVERE: What also floats in water?
VILLAGER #1: Bread!
VILLAGER #2: Apples!
VILLAGER #3: Uh, very small rocks!
http://www.mwscomp.com/movies/grail/grail-05.htm
(I suspect my response is somewhat off-thread...)
Jim Perrin wrote:
Well, if it quacks like a duck....
Then it's not a witch and we don't have to burn it.
Bowie Bailey spake the following on 10/12/2006 1:22 PM:
Is there a difference in the write performance, speed, or cache size of the drives? You didn't list specifics of the drives, so I had to ask.