Joseph L. Casale wrote:
802.3ad info LACP rate: slow
Just a thought, but try changing the lacp_rate to fast?
http://www.cyberciti.biz/howto/question/static/linux-ethernet-bonding-driver...
lacp_rate
Option specifying the rate at which we'll ask our link partner to transmit LACPDU packets in 802.3ad mode. Possible values are:
slow or 0 - Request partner to transmit LACPDUs every 30 seconds
fast or 1 - Request partner to transmit LACPDUs every 1 second
The default is slow.
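On CentOS you'd typically set that through the bonding module options, e.g. in /etc/modprobe.conf (off the cuff; bond0 and the exact options are assumptions, adjust to your setup):

alias bond0 bonding
options bond0 mode=802.3ad lacp_rate=fast miimon=100

You can check what was actually negotiated with:

cat /proc/net/bonding/bond0

which is where the "802.3ad info / LACP rate: slow" lines quoted above come from.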
nate
802.3ad info LACP rate: slow
Just a thought, but try changing the lacp_rate to fast?
Nate, you rock! jlc
Chiming in a bit late on this one.
Was wondering if you've messed around with different bonding modes?
I found modes 5 and 6 to be very problematic, while mode 0 was the fastest.
Setting up 802.3ad didn't seem to yield anything more than what a single link could provide based on monitoring via iptraf.
What's your experience with bonding?
Hoping to get others' viewpoints.
Setting up 802.3ad didn't seem to yield anything more than what a single link could provide based on monitoring via iptraf.
What's your experience with bonding?
You can't ever get more traffic on an aggregate than what one link will do to a single host. Specifically, one conversation always flows over one link; what bonding does is either keep a link on standby for HA, or distribute conversations across many links so that aggregate throughput is higher.
So, no, you won't see "speed" improvements with a tool like iperf in the sense I suspect you're hoping to see.
What you could do is have 8 1-gig hosts send/receive nearly 1 gig of data simultaneously to a single server with an 8-interface aggregate, if it's set up appropriately. (Other factors apply, but this is generally how it works.)
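For what it's worth, which link a given conversation lands on is decided by the bond's transmit hash policy. If your traffic is many flows between the same two hosts, you can hash on layer 3/4 info instead of MACs, something like this in /etc/modprobe.conf (off the cuff; bond0 is an assumption):

alias bond0 bonding
options bond0 mode=802.3ad lacp_rate=fast xmit_hash_policy=layer3+4 miimon=100

Each individual flow still rides a single link; layer3+4 just lets separate TCP/UDP conversations between the same pair of hosts spread across slaves.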
Ah, so the fact that mode 0 is round-robin is why I see a gain from a single host.
Very cool info, thanks Joseph.
aurfalien@gmail.com wrote:
Setting up 802.3ad didn't seem to yield anything more than what a single link could provide based on monitoring via iptraf.
That is expected; 802.3ad is for aggregating traffic, so many:1 traffic is faster than a single link.
If you want faster throughput than that for a single link, I suggest going to 10Gbps for best reliability. Many switch vendors now have low-cost 10GbE gear; some even allow you to use CAT5E for it, to keep you from having to rip out all of your cabling.
Most of my systems use mode=1, though we have an Exanet NAS cluster that uses mode=6, I believe. The vendor (Exanet) has moved to 802.3ad for newer versions of their software because it's more standardized.
nate
Hi Nate,
Off topic, but why Exanet over, say, BlueArc?
And why bond1 over bond0?
aurfalien@gmail.com wrote:
Hi Nate,
Off topic, but why Exanet over, say, BlueArc?
Oh, how ironic... We migrated off of our BlueArc onto the Exanet. We still have the 4 racks of BlueArc equipment if you or anyone else is interested; the best offer we've had is $500 from our co-lo provider for 150TB of storage and 3 EOL'd BlueArc head units.
In a nutshell: while BlueArc makes some good NAS technology, certainly very fast, the fact is most other NAS companies have caught up to the point where they are "fast enough". Our Exanet cluster is running at ~30% CPU.
BlueArc's back-end storage is absolute crap, at least their LSI stuff. They refused to support any storage other than their own or storage from HDS, which makes good stuff but is overpriced and overly complicated.
I would have liked to have kept their NAS technology and put it in front of an equally impressive 3PAR back-end storage array, but they wouldn't have it. I had not heard of Exanet (nor of BlueArc), but when I approached 3PAR they brought in Exanet as their NAS partner of choice, mainly for performance and cost reasons.
And why bond1 over bond0?
bond1 over bond0? Not sure what you're referring to; if you mean mode=1 over mode=0, it's a simpler active/failover design, and these systems don't need more than 1Gbps of throughput apiece.
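For reference, a mode=1 setup on CentOS looks something like this (off the cuff; the device names and IP are assumptions):

/etc/modprobe.conf:
alias bond0 bonding
options bond0 mode=1 miimon=100

/etc/sysconfig/network-scripts/ifcfg-bond0:
DEVICE=bond0
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes
BOOTPROTO=none

/etc/sysconfig/network-scripts/ifcfg-eth0 (and likewise ifcfg-eth1):
DEVICE=eth0
MASTER=bond0
SLAVE=yes
ONBOOT=yes
BOOTPROTO=none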
nate
You mind running Bonnie on your Exanet?
We can compare charts, Exanet vs BlueArc.
Lemme know so I can start preparing the test.
aurfalien@gmail.com wrote:
You mind running Bonnie on your Exanet?
We can compare charts, Exanet vs BlueArc.
Lemme know so I can start preparing the test.
Our system is slammed almost 24/7 (our disks are sustaining 60ms service times for writes, though front-end write response time is around 2-3ms), so I can't get accurate numbers for you. From a blog entry of mine: http://www.techopsguys.com/2009/08/04/123/
I can send you (off-list) some basic iozone numbers I took in the early days of testing; I didn't have the best settings at the time, so a lot of it is from cache, not from disk. I plan to add another 100 disks early next year and re-stripe all of the data, which should dramatically improve performance.
"Better" results probably come from SpecSFS numbers; at least you can get something decent to compare with, though BlueArc hasn't posted numbers with the new version yet: http://www.spec.org/sfs2008/results/
As I mentioned in another email, I don't think the bottleneck is the NAS, it's the disks. Given the load we see today I could double the spindle count to 400 disks (SATA-II) and still not max out a two-node Exanet cluster.
nate
Dude, where do you work to sustain or even need such crazy I/O?
Mebbe like a hosting service?
I mean I didn't see a Digital Domain or ILM sig on your email :)
- Brian
I meant iozone, not Bonnie.
Sorry.
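Something like this is what I have in mind for the runs (just a sketch; the mount point and file size are assumptions, adjust to taste):

iozone -R -a -g 4g -i 0 -i 1 -f /mnt/nas/testfile -b results.xls

-i 0 and -i 1 limit it to the sequential write and read tests, -g caps the maximum file size, and -b writes an Excel-compatible report for easy charting.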
I'm switching a couple of servers from fbsd to CentOS. In fbsd I'm used to ee as the text editor. Is that available in CentOS?
Cheers,
Jonas Fornander
System Administrator
Netwood Communications, LLC
Tel: 310-442-1530 / Fax: 310-496-0712
Webmaster wrote:
I'm switching a couple of servers from fbsd to CentOS. In fbsd I'm used to ee as the text editor. Is that available in CentOS?
nano is very similar...
GNU nano 1.3.12
...
^G Get Help    ^O WriteOut    ^R Read File   ^Y Prev Page   ^K Cut Text    ^C Cur Pos
^X Exit        ^J Justify     ^W Where Is    ^V Next Page   ^U UnCut Text  ^T To Spell
yum install nano...
I'm switching a couple of servers from fbsd to CentOS. In fbsd I'm used to ee as the text editor. Is that available in CentOS?
Not as an official package, but RPMfind appears to have a source RPM for it...
http://www.rpmfind.net//linux/RPM/openpkg/current/SRC/EVAL/ee-1.5.0-20090527...
Source packages are typically installed in
/usr/src/redhat/SOURCES/
and there should be a spec file in
/usr/src/redhat/SPECS/
(working off the cuff) you should be able to do something like:
rpm -ivh ee*.rpm
rpmbuild -bb /usr/src/redhat/SPECS/ee.spec
and then after it builds (and assuming all the dependencies are met), you should have a binary RPM in
/usr/src/redhat/RPMS/
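Alternatively, there's a one-step version (again off the cuff; assumes the rpm-build package is installed):

rpmbuild --rebuild ee-1.5.0-*.src.rpm

That installs the sources, builds the binary RPM into /usr/src/redhat/RPMS/, and cleans up after itself.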
I'm switching a couple of servers from fbsd to CentOS. In fbsd I'm used to ee as the text editor. Is that available in CentOS?
I've just had time to fiddle with the rpmfind version I pointed you to, and it didn't work very well, so I've rebuilt it as a binary RPM...
http://wtf.geek.nz/dl/easyedit-1.5.0-1.tek.i386.rpm
and the spec file, in case you're interested, is at:
I'll leave the spec file up permanently, but may remove the RPM at some random point in the future, so grab it if you want it.