Has anyone implemented this successfully?
I am asking because we are implementing Xen on our test lab machines, each of which holds up to three 3Com and Intel 10/100 Mbps NICs.
These servers are meant to replace MS messaging and intranet web servers that handle up to 5000 hits per day and thousands of mails. The Dom0 probably could not handle that kind of load over a single 100 Mbps link, and we cannot afford to change all the networking hardware to gigabit, at least not yet.
Any pointers perhaps?
Greetings from Mexico.
on 7-15-2008 1:34 PM Victor Padro spake the following:
Has anyone implemented this successfully? [...]
How fast is your incoming connection? If you have a data line from the outside world that can saturate a 100 Mbit network card, you can afford new cards.
We have 2 T1 and 3 ADSL 4 Mbps lines, so we're not too worried about the incoming connections; most of the traffic is generated inside. We cannot afford to change the whole network infrastructure: that would mean replacing Cisco 10/100 Mbps switches, routers, NICs, the wiring (we're on Cat5), maybe some server NICs, and the boss's way of seeing things (the hardest part, I think). It was a real success just getting the test lab approved, you know... :)
We have about 200 active users generating mail, web-based reports, Siebel access, etc. We have had some bottlenecks on the mail gateways and in the SugarCRM domUs, which is why I am trying to figure out how to use bonding, or even dedicate one NIC each to the mail gateway, the CRM, and so on; but we're limited to 3-4 NICs per machine.
Any ideas?
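For the dedicated-NIC idea, I was thinking of a wrapper around Xen's network-bridge script so each physical NIC gets its own bridge, something like this (an untested sketch; the script name my-network-script and the eth1/xenbr1 assignment are just examples):

#!/bin/sh
# /etc/xen/scripts/my-network-script
# Build one bridge per physical NIC so a domU can get a dedicated link.
dir=$(dirname "$0")
"$dir/network-bridge" "$@" vifnum=0 netdev=eth0 bridge=xenbr0
"$dir/network-bridge" "$@" vifnum=1 netdev=eth1 bridge=xenbr1

Then point xend at it in /etc/xend-config.sxp:

(network-script my-network-script)

and put the mail gateway domU on the dedicated bridge in its config file:

vif = [ 'bridge=xenbr1' ]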
"Victor Padro" vpadro@gmail.com writes:
Has anyone implemented this successfully?
I have not used bonding with xen, but once you have a bonded interface in the Dom0 it should be trivial. Set up your bonded interface as usual, then in /etc/xend-config.sxp, where it says (network-script network-bridge), set it to
(network-script 'network-bridge netdev=bond0')
It should just work.
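On CentOS 5 the bond itself is usually built with the normal ifcfg files, roughly like this (a sketch, assuming eth0/eth1 as slaves and active-backup mode; the IP is a placeholder, and the mode should match what your switch supports):

# /etc/modprobe.conf
alias bond0 bonding
options bond0 mode=active-backup miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
BOOTPROTO=static
IPADDR=192.168.1.10
NETMASK=255.255.255.0
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth0 (and the same for eth1)
DEVICE=eth0
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes

With netdev=bond0 the xen bridge script should then attach the bond to the bridge instead of a single NIC.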
These servers are meant to replace MS messaging and intranet web servers that handle up to 5000 hits per day and thousands of mails [...]
100 Mbps is a whole lot of bandwidth for a web server unless you are serving video or large file downloads or something. And 100 Mbps worth of mail traffic is enough to choke a very powerful mail server, never mind Exchange.
I suspect that if you are running Windows on Xen, disk and network I/O to and from the Windows domU will be a bigger problem than network speeds. Are you using the paravirtualized Windows drivers? Without them, network and disk I/O is going to feel pretty slow in Windows, no matter how fast the actual network or disk is.
I have not used bonding with xen, but once you have a bonded interface in the Dom0 it should be trivial. [...]
I read that somewhere a while ago... I will make a note of it, thank you.
100 Mbps is a whole lot of bandwidth for a web server unless you are serving video or large file downloads or something. [...]
The web servers are used to upload video and audio conferences, even stream them across the LAN, and to access SugarCRM to download/view reports, etc. When we use only one M$ Exchange server, it sometimes gets bottlenecked by all the mail being sent: AVI and MPEG videos, WAVs, MP3s, Excel and PowerPoint docs, etc. But we have 2 backup servers that can handle it all just fine, so we're trying to replace them with a Xen cluster based on CentOS and Postfix.
Are you using the paravirtualized Windows drivers? [...]
We're not using Windows under Xen; we're trying to get rid of M$ (mostly to reduce licensing fees). We use CentOS for SugarCRM and Debian for DNS, but we'd like to use CentOS for everything if we could.
-----Original Message-----
From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Victor Padro
Sent: Wednesday, 16 July 2008 8:34 AM
To: CentOS mailing list
Subject: Re: [CentOS] Bonding and Xen
We're not using Windows under Xen; we're trying to get rid of M$ (mostly to reduce licensing fees).
Pray tell, what are you replacing Exchange with? My end users are dependent on Exchange for calendaring, but the organization is becoming more and more anti-MS, and it won't be long before they decide enough is enough, especially since some savvy users refused to upgrade to Vista and are now using CentOS as their preferred desktop. These savvy users are software developers who were previously using XP and multiple PuTTY sessions to get the job done!
Cheers, AK.
From: Anthony Kamau
Pray tell, what are you replacing Exchange with? [...]
I am currently testing Sun Java Communications Suite 5 on CentOS 5.2 (http://www.sun.com/software/communications_suite/index.xml), which seems to be a good alternative to Exchange. There is a free Outlook connector and the documentation is very good. It's free if you don't need technical support.
Regards Lars
On Tuesday 15 July 2008 22:34:56 Victor Padro wrote:
Has anyone implemented this successfully? [...]
Just go the normal way. As long as you are not using VLANs on top of bonds, the default bridge scripts should do just fine. Before using a bond in anything other than an active/backup configuration, I urge you to read and understand bonding.txt from the kernel source: http://www.mjmwired.net/kernel/Documentation/networking/bonding.txt (or just web-search for bonding.txt). The problem is not configuring the bonding on a Linux machine, but getting the network setup right (EtherChannel, LACP, one switch or multiple switches, etc.) and knowing what to expect from which setup.
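For example, with LACP (802.3ad) both ends have to agree. On the Linux side that means something like:

options bond0 mode=802.3ad miimon=100

and on a Cisco switch something like this (IOS syntax, purely as an illustration):

interface range FastEthernet0/1 - 2
 channel-group 1 mode active

If the two sides don't match, the bond may come up but will not behave the way you expect.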
And last but not least, human communication between the network guys and the OS guys.
Those are the biggest problems with bonding, in my experience.
-marc.
On Tue, Jul 15, 2008 at 03:34:56PM -0500, Victor Padro wrote:
Has anyone implemented this successfully?
Yes and no. :/
This is not CentOS-specific, nor specific to RHEL and clones, but a general Linux issue: bonding plus bridging is broken and leads to loops on the switch, unless the switch is intelligent enough to do trunking on its side.
The problem is that outgoing packets from the virtual xen bridge are seen by the other bonding members, so the learned MAC addresses on the xen bridge toggle between the VM and the outside interface.
I had posted this issue on the respective lists, but nothing happened; ideally the bridge code would allow for static MACs.
This indirectly affects anything that uses a Linux bridge, including xen and most other virtualization solutions. If you google for bonding on each of them you will find trouble reports all over.
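You can watch the flapping happen with the standard bridge tools, e.g.:

brctl showmacs xenbr0

Run it a few times under load; if a guest's MAC address keeps jumping between port numbers (the domU's vif and the bonded uplink), you are seeing exactly this problem.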
So your options are:
a) use only active/backup type solutions to avoid loops.
b) use an intelligent switch that is able to trunk ports and therefore does not generate the loops. But then these servers cannot be PXE/DHCP booted anymore (for reinstalling them).
I had these issues with the 2x1Gbit setup on the ATrpms servers and lost a lot of hair over it. The increased throughput was there in the end, but maybe I'd have preferred to keep my hair...