Hello fellow sysadmins!
I've assembled a whitebox system with a SuperMicro motherboard, case, 8GB of memory and a single quad core Xeon processor.
I have two 9650SE-8LPML cards (8 ports each) in each server with 12 1TB SATA drives total. Three drives per "lane" on each card.
CentOS 5.2 x86_64.
I'm looking for advice on tuning this thing for performance, especially in its role as either an NFS datastore for VMware or an iSCSI one.
I set up two volumes (one for each card): one RAID6 and one RAID5. I used the default 64K block size and am trying various filesystems on top of them. I stumbled across the following recommendations on 3Ware's site:
echo "64" > /sys/block/sda/queue/max_sectors_kb blockdev --setra 16384 /dev/sda echo "512" > /sys/block/sda/queue/nr_requests
But I'm wondering whether there are other things I should be looking at, including changing the I/O scheduler. Are there any particular options I should use at filesystem creation time to match up with my RAID block size?
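For XFS I'm guessing it would be something along these lines, though the numbers are just my assumptions: that the 64K figure is the per-disk stripe size, and that the 6-drive RAID6 unit has 4 data disks (the device names are placeholders too):

  # guess: 64K stripe unit, 6-drive RAID6 = 4 data disks
  mkfs.xfs -d su=64k,sw=4 /dev/sda1
  # the ext3 equivalent would presumably be stride = 64K / 4K blocks = 16
  mke2fs -j -b 4096 -E stride=16 /dev/sdb1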
I also noted that there is a newer 3Ware driver (2.26.08.004) available than the one that comes stock with CentOS 5.2 (2.26.02.008). I'm not sure if I can expect a performance improvement by "upgrading", and I imagine I'd have to mess with my initrd in any case, or boot with a driver disk option and blacklist the built-in driver...
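I assume the initrd part would boil down to something like this once the newer 3w-9xxx module is installed for the running kernel (the steps are just my guess):

  # rebuild the initrd so the updated 3w-9xxx module gets picked up at boot
  mkinitrd -f /boot/initrd-$(uname -r).img $(uname -r)
  # sanity check: version of the module that will be loaded for this kernel
  modinfo 3w-9xxx | grep -i version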
Thanks for any feedback!
Ray
Ray, I've had good performance from xfs with large filesystems. What kind of files are you looking to use, lots of smaller files or large media files??
On Thu, Jan 15, 2009 at 06:04:59PM -0500, Rob Kampen wrote:
Ray, I've had good performance from xfs with large filesystems. What kind of files are you looking to use, lots of smaller files or large media files??
I was leaning towards using XFS as well. We'll probably be handling a lot of large files (VMware datastore).
In my initial tests the iSCSI target (tgtd) appears to run quite quickly, whereas the NFS daemon gets bogged down and results in high system load.
I'm not yet sure if I should be using tgtd[1] or IET (iSCSI Enterprise Target)[2].
Thanks, Ray
[1] http://stgt.berlios.de
[2] http://iscsitarget.sourceforge.net/
Hi,
On Thu, Jan 15, 2009 at 18:12, Ray Van Dolson <rayvd@bludgeon.org> wrote:
I'm not yet sure if I should be using tgtd or IET (iSCSI Enterprise Target).
tgtd is already included as a technology preview in RHEL5 (RPM scsi-target-utils, you can install this one in CentOS 5 as well [this is what you're using, right?]) and the kernel-space component is included in the upstream Linux kernel from version 2.6.20. From these facts, I think there is a great chance that tgtd will be maintained and supported in RHEL6, which usually means that it will be kept stable and up to date. With that in mind, if I were to choose, I would tend to go towards tgtd if it worked for me.
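If you want to give it a quick try, the basic sequence with tgtadm is something along these lines (the IQN and the backing device below are just placeholders, adapt them to your setup):

  yum install scsi-target-utils
  service tgtd start
  # create a target, attach a backing store, and allow any initiator to connect
  tgtadm --lld iscsi --op new --mode target --tid 1 -T iqn.2009-01.org.example:vmstore
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 -b /dev/sdb1
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I ALL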
On the other hand, what should drive your decision is what works best for you. You should probably test both under a workload similar to the one you expect to have in production and see if one of them is a clear winner in that case.
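Even something crude run from the client side would give you a first data point, for example (the paths and sizes are just examples, adjust them to what is realistic for you):

  # rough sequential write/read pass from an NFS client or iSCSI initiator
  dd if=/dev/zero of=/mnt/test/bigfile bs=1M count=8192 oflag=direct
  dd if=/mnt/test/bigfile of=/dev/null bs=1M iflag=direct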
Let us know how that goes!
HTH, Filipe
On Thu, Jan 15, 2009 at 11:51:40PM -0500, Filipe Brandenburger wrote:
Hi,
On Thu, Jan 15, 2009 at 18:12, Ray Van Dolson <rayvd@bludgeon.org> wrote:
I'm not yet sure if I should be using tgtd or IET (iSCSI Enterprise Target).
tgtd is already included as a technology preview in RHEL5 (RPM scsi-target-utils, you can install this one in CentOS 5 as well [this is what you're using, right?]) and the kernel-space component is included in the upstream Linux kernel from version 2.6.20. From these facts, I think there is a great chance that tgtd will be maintained and supported in RHEL6, which usually means that it will be kept stable and up to date. With that in mind, if I were to choose, I would tend to go towards tgtd if it worked for me.
I've been doing a bit of reading today on iSCSI target solutions in the Linux world. It's a bit of a mess. :-)
I am already using STGT (tgtadm, tgtd) in CentOS 5.2. Has RH backported the prerequisite kernel components already? I assume so, or could this thing be running entirely in userland?
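I suppose I can check the kernel side with something like this (assuming the module kept its upstream name, scsi_tgt):

  # is a scsi_tgt module shipped with this kernel, and is it currently loaded?
  modinfo scsi_tgt
  lsmod | grep scsi_tgt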
I'm also going to give LIO-Target a try as I understand it might give the best performance.
IET and SCST are other options out there.
On the other hand, what should drive your decision is what works best for you. You should probably test both under a workload similar to the one you expect to have in production and see if one of them is a clear winner in that case.
Let us know how that goes!
Thanks for the reply!
Ray
Hi,
About the I/O scheduler: deadline gives better performance. Something like
  echo "deadline" > /sys/block/sdX/queue/scheduler
in /etc/rc.local will do the trick. Someone also told me that setting /proc/sys/vm/dirty_ratio (as a % of total RAM) so that it fits into your 3ware cache could be a good idea. I have no numbers to show, though. HTH. Laurent
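PS: for example, if the cards have 256 MB of cache each (check what yours actually have) and the box has 8 GB of RAM, 2 x 256 MB works out to roughly 6% of RAM, so something like this in rc.local (sda/sdb assumed to be the two 3ware units):

  # guess: keep the dirty page limit around the size of the combined controller cache
  echo 6 > /proc/sys/vm/dirty_ratio
  # deadline scheduler on both 3ware units
  for dev in sda sdb; do
      echo deadline > /sys/block/$dev/queue/scheduler
  done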
On Jan 15, 2009, at 6:12 PM, Ray Van Dolson <rayvd@bludgeon.org> wrote:
On Thu, Jan 15, 2009 at 06:04:59PM -0500, Rob Kampen wrote:
Ray, I've had good performance from xfs with large filesystems. What kind of files are you looking to use, lots of smaller files or large media files??
I was leaning towards using XFS as well. We'll probably be handling a lot of large files (VMware datastore).
In my initial tests the iSCSI target (tgtd) appears to run quite quickly, whereas the NFS daemon gets bogged down and results in high system load.
I'm not yet sure if I should be using tgtd[1] or IET (iSCSI Enterprise Target)[2].
We over on the IET list would love to hear your experience with both and not just on performance but ease of use and reliability too.
The primary IET creator is also one of the primary TGT developers and I'm sure he'd like to hear an unbiased user's perspective.
-Ross