I am working on setting up an NFS server, which will mainly serve files to web servers, and I want to set up two bonds. I have a question regarding *which* bonding mode to use. None of the documentation I have read suggests any mode is "better" than the others, except in specific use cases (e.g. active-backup when the switch does not support 802.3ad).
Since my switch *does* support 802.3ad, including layer 2, 3 and 4 hashing, should I use mode=4? Or would one of the other modes, specifically balance-tlb or balance-alb, be "better" for providing fail-over and link aggregation?
Any suggestions are appreciated.
Thanks, Rick
On Mon, Sep 14, 2009 at 10:59 AM, Rick Barnes <linux@sitevision.com> wrote:
8<
> Since my switch *does* support 802.3ad, including layer 2, 3 and 4 hashing, should I use mode=4? Or would one of the other modes, specifically balance-tlb or balance-alb, be "better" for providing fail-over and link aggregation?
Are you more interested in load balancing or fault tolerance?
-jonathan
Rick Barnes wrote:
> Since my switch *does* support 802.3ad, including layer 2, 3 and 4 hashing, should I use mode=4? Or would one of the other modes, specifically balance-tlb or balance-alb, be "better" for providing fail-over and link aggregation?
Use 802.3ad. Also, if performance is really an issue, I'd suggest setting up two networks on the NFS server, one with standard frame sizes and the other with jumbo frames, and directly connecting the systems that need higher throughput to a dedicated VLAN (or VLANs) running jumbo frames.
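To make that concrete, here is a minimal sketch of what mode=4 plus a jumbo-frame bond might look like on a Linux host using the old module-options style. The file path, the ethX/bondX names and the MTU are placeholders; adjust them for your distro and hardware:

  # /etc/modprobe.d/bonding.conf (path varies by distro)
  # mode=4 is 802.3ad; xmit_hash_policy=layer3+4 matches the
  # switch-side L3/L4 hashing; max_bonds=2 creates bond0 and bond1
  options bonding max_bonds=2 mode=4 miimon=100 lacp_rate=fast xmit_hash_policy=layer3+4

  # bond0 on the standard-frame network, bond1 on the jumbo VLAN
  # (eth0-eth3 and the 9000-byte MTU are placeholders)
  modprobe bonding
  ifconfig bond0 up
  ifenslave bond0 eth0 eth1
  ifconfig bond1 mtu 9000 up
  ifenslave bond1 eth2 eth3

Keep in mind the switch ports behind each bond need to be configured as an LACP group, and every device on the jumbo VLAN has to agree on the MTU.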
Also note that link aggregation will not increase throughput between a single pair of hosts; i.e., if you have one host talking to one server, you will not get higher throughput with link aggregation. What link aggregation will do is allow many hosts to communicate with a single host at a higher aggregate throughput.
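The reason is how frames get assigned to slaves: a hash of the packet headers picks one slave per flow, so a single TCP connection is pinned to one link. For the layer3+4 policy, the kernel's bonding documentation describes the selection as roughly

  ((source port XOR dest port) XOR
   ((source IP XOR dest IP) AND 0xffff)) modulo slave count

Since every packet of a given connection produces the same hash, one client talking to the server over one connection can never exceed a single link's bandwidth.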
If you want faster, simpler single-stream throughput, go with 10GbE; good-quality line-rate 10GbE switches are very cost-effective these days, probably 90%+ cheaper than 10GbE was 5-6 years ago.
Though you're likely going to need a lot of disks to come close to saturating even a 1Gbps connection, depending on your workload.
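For back-of-the-envelope numbers (rough figures of mine, not a benchmark): 1 Gbps is 125 MB/s of raw line rate, and after Ethernet/IP/TCP overhead you can realistically push somewhere around 110 MB/s. A single 7200 RPM SATA drive can get close to that on purely sequential reads, but a random-I/O workload like small-file web serving might see on the order of 1 MB/s per spindle (roughly 100 random IOPS x 8 KB), which is why it can take a lot of disks to keep even one gigabit link busy.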
nate