Hello All,
I am currently using bonding with 2 NICs (mode 0). It's been working well, but I am trying to understand how it works (I am a total newbie).
mode=0 (balance-rr) Round-robin policy: Transmit packets in sequential order from the first available slave through the last. This mode provides load balancing and fault tolerance.
So I have 2 NICs (1st NIC attached to switch A, 2nd NIC attached to switch B).
I have one virtual interface, bond0.
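For reference, my config is along these lines (a minimal sketch of the usual CentOS-style setup; the eth0/eth1 names and the IP address are just illustrative):

    # /etc/modprobe.conf -- load the bonding driver in mode 0
    alias bond0 bonding
    options bond0 mode=0 miimon=100

    # /etc/sysconfig/network-scripts/ifcfg-bond0 -- the virtual interface
    DEVICE=bond0
    IPADDR=192.168.1.10
    NETMASK=255.255.255.0
    ONBOOT=yes
    BOOTPROTO=none

    # /etc/sysconfig/network-scripts/ifcfg-eth0 -- one physical slave
    # (ifcfg-eth1 is identical apart from DEVICE=eth1)
    DEVICE=eth0
    MASTER=bond0
    SLAVE=yes
    ONBOOT=yes
    BOOTPROTO=none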
Suppose data is being pushed out: it will go out the 1st NIC, and when that gets overloaded it will use the 2nd NIC. The bonding driver will be responsible for this.
The pull will be very similar to the push: the data gets pulled in and the bonding driver assembles the packets together? Does this sound right?
Sorry for a newbie question...
TIA
On Sat, Sep 6, 2008 at 11:57, Mag Gam magawake@gmail.com wrote:
Suppose data is being pushed out: it will go out the 1st NIC, and when that gets overloaded it will use the 2nd NIC.
No. If you are using "balance-rr", one packet will go through the 1st NIC, and the next packet will go through the 2nd one. That's what "rr" (round-robin) means.
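You can actually watch this happen. Assuming your slaves are eth0 and eth1 (adjust to your names), push some traffic and watch the per-slave transmit counters; with balance-rr they should climb at roughly the same rate:

    # show the bonding mode and the enslaved interfaces
    cat /proc/net/bonding/bond0

    # watch the TX packet counters of both slaves side by side
    watch -n1 "grep . /sys/class/net/eth0/statistics/tx_packets /sys/class/net/eth1/statistics/tx_packets"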
The pull will be very similar to the push: the data gets pulled in and the bonding driver assembles the packets together? Does this sound right?
Actually this will not be determined by the bonding driver; it will be determined by the switch that is actually "pushing" the packets to your host. The bonding driver will only make it look like the packets are coming from one (bonded) interface.
How the switch will behave depends on its configuration. It may be configured to send all the data through only one of the interfaces, or to balance across both of them using round-robin or something else.
You should try to read this, it's very complete: /usr/share/doc/iputils-*/README.bonding
Also, if your switch supports it, you should try to use the 802.3ad mode (mode=4) since that will probably give you the best results with bonding (in terms of load balancing and fault tolerance).
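Mode-wise it is just another module option; the switch ports also have to be configured as an LACP aggregate for it to work (this is only a sketch, check README.bonding for the details):

    # /etc/modprobe.conf -- 802.3ad (LACP) mode
    alias bond0 bonding
    options bond0 mode=802.3ad miimon=100 lacp_rate=slow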
HTH, Filipe
Filipe:
Thank you! Your explanation helps a lot. It makes more sense than reading mundane manuals :-)
Actually, would there be a big performance boost when using mode4? Currently I am seeing 95% total throughput, which isn't that bad. I am peaking at 238MB/sec (each gig/e connection)
Also, mode0 does fault tolerance, meaning if a switch failure occurs we should still be good, but how would the packets then be transferred? I suppose rr would be disabled since it won't need to alternate, correct?
On Sat, Sep 6, 2008 at 13:11, Mag Gam magawake@gmail.com wrote:
Actually, would there be a big performance boost when using mode4?
Not necessarily, since balance-rr already gives you load balancing. They actually implement it differently: balance-rr can spread packets of the same TCP connection across the two links, so you may use your links more fully, but with the side effect of having your packets delivered out of order. In 802.3ad, all packets of a single TCP connection will use the same link; this means your links will not be as balanced as with balance-rr, but it will not require reordering on the other side of the connection. Check section 12.1.1 in /usr/share/doc/iputils-*/README.bonding. In any case, you should evaluate what your needs are and tune for that.
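One knob worth knowing about here is xmit_hash_policy, which controls how 802.3ad picks a link for each connection. Whether a given policy is available depends on your kernel's bonding driver, so treat this as a sketch and check README.bonding:

    # hash on IP addresses and ports instead of MACs, so different TCP
    # connections between the same pair of hosts can use different slaves
    options bond0 mode=802.3ad miimon=100 xmit_hash_policy=layer3+4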
Currently I am seeing 95% total throughput.
If you have only a few clients doing huge transfers, 802.3ad will probably not be as good as balance-rr for that. Again, you should tune it for your needs.
I am peaking at 238MB/sec (each gig/e connection)
I believe you mean 238MB/sec on both interfaces, since 1Gbps = 125MB/s.
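To spell out the arithmetic: 1 Gbit/s / 8 = 125 MB/s per link, so two links give a theoretical aggregate of 250 MB/s, and 238/250 is roughly 95%, which lines up with the throughput figure you quoted.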
Also, mode0 does fault tolerance, meaning if a switch failure occurs we should still be good, but how would the packets then be transferred? I suppose rr would be disabled since it won't need to alternate, correct?
Actually balance-rr is still there, it is just doing round-robin over one interface only. Remember, you could have a bonding of 3, 4 or more interfaces; in that case, if you lose one you still have more than one to balance traffic through.
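The failover itself comes from link monitoring (the miimon parameter): the driver polls each slave's link state every miimon milliseconds and simply skips dead slaves in the rotation. You can see which slaves are up at any moment:

    # each slave is listed with its own MII Status line
    grep -A1 "Slave Interface" /proc/net/bonding/bond0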
Filipe
So, I decided to go with mode 6 (balance-alb) since my network admin says that's supported at my college.
I have everything working perfectly; however, I still get an occasional packet drop, which is not good.
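For reference, the relevant part of my config is simply (a sketch, same layout as before):

    # /etc/modprobe.conf -- balance-alb needs no special switch support,
    # but the NIC drivers must support ethtool and changing the MAC
    # address while the device is up
    alias bond0 bonding
    options bond0 mode=balance-alb miimon=100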
http://www.howtoforge.com/network_card_bonding_centos
After reading the HOWTO and README.bonding, I am not sure if I am missing anything else. Has anyone else configured this before?
TIA
On Mon, Nov 10, 2008 at 11:17:57PM -0500, Mag Gam wrote:
So, I decided to go with mode 6 (balance-alb) since my network admin says that's supported at my college.
I have everything working perfectly; however, I still get an occasional packet drop, which is not good.
Occasional...? Except on a dedicated point-to-point link, some packet drop is normal up to a point. What is the rate of loss, and what is your expectation?
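To put a number on it, something like this will do (a sketch; substitute a host on the path you care about for the placeholder):

    # interface-level drop counters for the bond and its slaves
    netstat -i

    # end-to-end loss over a decent sample; the summary line reports
    # "X% packet loss"
    ping -c 1000 -i 0.2 some-host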