Hi,
I am running a VOIP application on CentOS 5.3. It is a 16-core box with 12 GB of memory, and all it does is pass packets.
What happens is that at around 2K channels running the G.711 (64k) codec, eth0 is completely used up and no more traffic can get through.
I have checked Google, and it talked about the interrupt scheduler.
Does anyone know how to configure the kernel to allow it to use all CPUs for socket transmission of UDP packets?
Any pointer will be greatly appreciated.
pete
On Wed, Feb 24, 2010 at 2:30 PM, Pete Kay petedao@gmail.com wrote:
Hi,
I am running a VOIP application on CentOS 5.3. It is a 16-core box with 12 GB of memory, and all it does is pass packets.
What happens is that at around 2K channels running the G.711 (64k) codec, eth0 is completely used up and no more traffic can get through.
I have checked Google, and it talked about the interrupt scheduler.
Does anyone know how to configure the kernel to allow it to use all CPUs for socket transmission of UDP packets?
Any pointer will be greatly appreciated.
If you meant CPS (connections per second): there are some configuration parameters in the xinetd.conf file that relate to CPS for services handled through xinetd. However, in very high-usage cases it may be worthwhile to configure the service to listen on the port directly. Not sure if this is what you mean...
Pete Kay wrote:
What happens is that at around 2K channels running the G.711 (64k) codec, eth0 is completely used up and no more traffic can get through.
2000 * 64 kbit/s is 128 Mbit/s, not counting any additional protocol overhead. Are you sending and receiving this data on the same Ethernet port? Then it's double that in total traffic. What speed is your Ethernet?
If all this data is flowing through a single UDP port, then it's pretty hard to spread it across multiple CPUs, as only one application thread is reading or writing on that socket and doing whatever sort of routing you're doing. You may well be better off with a server that has fewer, faster cores.
On Wednesday 24 February 2010 14:55:22 John R Pierce wrote:
Pete Kay wrote:
What happens is that at around 2K channels running the G.711 (64k) codec, eth0 is completely used up and no more traffic can get through.
2000 * 64 kbit/s is 128 Mbit/s, not counting any additional protocol
Actually, if you are running G.711 over VoIP, the overhead brings it up to something like 87 kbit/s per direction.
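For what it's worth, that ~87 kbit/s figure can be reproduced with a back-of-the-envelope calculation, assuming the common 20 ms packetization and plain IPv4 over Ethernet (the packetization interval and header sizes are my assumptions, not something stated in the thread):

```python
# G.711 over RTP: 64 kbit/s of audio in 20 ms packets = 50 packets/s,
# 160 payload bytes each. Add per-packet RTP/UDP/IPv4/Ethernet headers.
PAYLOAD = 160                          # bytes of G.711 audio per 20 ms packet
RTP, UDP, IPV4, ETHER = 12, 8, 20, 18  # header (+ FCS) bytes per packet
PPS = 50                               # packets per second, per direction

wire_bytes = PAYLOAD + RTP + UDP + IPV4 + ETHER  # 218 bytes on the wire
kbps = wire_bytes * 8 * PPS / 1000               # per channel, per direction
print(kbps)  # → 87.2
```

Counting the Ethernet preamble and inter-frame gap as well would push it a bit higher still.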
Bobby
Bobby wrote:
On Wednesday 24 February 2010 14:55:22 John R Pierce wrote:
Pete Kay wrote:
What happens is that at around 2K channels running the G.711 (64k) codec, eth0 is completely used up and no more traffic can get through.
2000 * 64 kbit/s is 128 Mbit/s, not counting any additional protocol
Actually, if you are running G.711 over VoIP, the overhead brings it up to something like 87 kbit/s per direction.
OK, that puts it up around 175 Mbit/s, and that's just one way... If these are full-duplex conversations, then that amount of data is being received AND transmitted at each end. Now we're up to 700 Mbit/s if we're looking at transmit and receive, in and out...
Hi
So is that the limit? I have heard of people being able to run something like 10K call channels before maxing out the CPU.
Is that only possible if multiple NICs are being used?
Please help.
pete
On Wed, Feb 24, 2010 at 2:53 PM, John R Pierce pierce@hogranch.com wrote:
Bobby wrote:
On Wednesday 24 February 2010 14:55:22 John R Pierce wrote:
Pete Kay wrote:
What happens is that at around 2K channels running the G.711 (64k) codec, eth0 is completely used up and no more traffic can get through.
2000 * 64 kbit/s is 128 Mbit/s, not counting any additional protocol
Actually, if you are running G.711 over VoIP, the overhead brings it up to something like 87 kbit/s per direction.
OK, that puts it up around 175 Mbit/s, and that's just one way... If these are full-duplex conversations, then that amount of data is being received AND transmitted at each end. Now we're up to 700 Mbit/s if we're looking at transmit and receive, in and out...
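The running totals above can be sketched the same way — the channel count and the ~87.2 kbit/s per-direction figure come from the thread, and the rest is arithmetic (the duplex/relay doubling factors reflect my reading of the scenario, a box that receives and retransmits both directions of every call):

```python
per_channel_kbps = 87.2  # G.711 + RTP/UDP/IP/Ethernet, one direction
channels = 2000

one_way_mbps = per_channel_kbps * channels / 1000  # ≈ 174.4 Mbit/s
duplex_mbps = one_way_mbps * 2                     # both call directions
# The box is a relay: every stream is received AND retransmitted,
# so the NIC sees roughly twice the duplex figure again.
nic_total_mbps = duplex_mbps * 2                   # ≈ 697.6 Mbit/s
print(one_way_mbps, nic_total_mbps)
```

Which is why a single gigabit eth0 runs out of headroom around the 2K-channel mark.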
CentOS mailing list
CentOS@centos.org
http://lists.centos.org/mailman/listinfo/centos
Pete Kay wrote:
Hi
So is that the limit? I have heard of people being able to run something like 10K call channels before maxing out the CPU.
I would verify the network throughput of your system to make sure the NIC/switch/etc. are functioning normally. I use iperf to do this; it's a really simple tool to use, you just need two systems.
On a good network you should be able to sustain roughly 900+ Mbit/s with standard frame sizes and iperf on a single GigE link (hopefully with no tuning).
sample run:
------------------------------------------------------------
Client connecting to pd1-bgas01, TCP port 5001
TCP window size: 205 KByte (default)
------------------------------------------------------------
[ 3] local 10.16.1.12 port 54559 connected with 10.16.1.11 port 5001
[ 3] 0.0-10.0 sec  1.06 GBytes  912 Mbits/sec
There are lots of options you can use to configure iperf to simulate various types of traffic.
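For example, a typical two-host UDP run looks something like this (the hostname is a placeholder; the -u/-b/-l flags make the client generate UDP at a fixed rate with small, roughly VoIP-sized datagrams):

```shell
# On the receiving host: start iperf in UDP server mode.
iperf -s -u

# On the sending host: push 10 seconds of UDP at 200 Mbit/s using
# 218-byte datagrams (about one G.711 RTP packet on the wire).
iperf -c receiver-host -u -b 200M -l 218 -t 10
```

The UDP server-side report also shows loss and jitter, which matters more for VoIP than raw throughput does.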
nate
Pete Kay wrote, On 02/24/2010 06:08 PM:
Hi
So is that the limit? I have heard of people being able to run something like 10K call channels before maxing out the CPU.
Were those people running G.711, or something using less bandwidth?
And why are you thinking CPU cap? What is the load average (from top or uptime)? In top** (after pressing 1), are you seeing processors that are not idling, or that are in large wait states? [The purpose of these questions is either to get information that helps us understand that you have a CPU problem, or to point out to you that you don't.]
** There are better applications than top for seeing where the processors are spending their time, but my brain is mush for remembering them right now.
On Wed, Feb 24, 2010 at 5:55 PM, Todd Denniston Todd.Denniston@tsb.cranrdte.navy.mil wrote:
Pete Kay wrote, On 02/24/2010 06:08 PM:
Hi
So is that the limit? I have heard of people being able to run something like 10K call channels before maxing out the CPU.
Were those people running G.711, or something using less bandwidth?
And why are you thinking CPU cap? What is the load average (from top or uptime)? In top** (after pressing 1), are you seeing processors that are not idling, or that are in large wait states? [The purpose of these questions is either to get information that helps us understand that you have a CPU problem, or to point out to you that you don't.]
** There are better applications than top for seeing where the processors are spending their time, but my brain is mush for remembering them right now.
The following might help:
mpstat -P ALL 2 5
You might need to install it first:
yum install sysstat
See
http://perso.wanadoo.fr/sebastien.godard/
----- "Pete Kay" petedao@gmail.com wrote:
Hi,
I am running a VOIP application on CentOS 5.3. It is a 16-core box with 12 GB of memory, and all it does is pass packets.
What happens is that at around 2K channels running the G.711 (64k) codec, eth0 is completely used up and no more traffic can get through.
I have checked Google, and it talked about the interrupt scheduler.
Does anyone know how to configure the kernel to allow it to use all CPUs for socket transmission of UDP packets?
Any pointer will be greatly appreciated.
pete
What kind of NICs are you using?
--Tim
Pete Kay wrote:
Hi,
I am running a VOIP application on CentOS 5.3. It is a 16-core box with 12 GB of memory, and all it does is pass packets.
What happens is that at around 2K channels running the G.711 (64k) codec, eth0 is completely used up and no more traffic can get through.
So that's about 128 Mbit/s, not counting UDP packet overheads. I'd guesstimate that you cannot get much more than that through a 1G Ethernet NIC - it is not CPU bound; rather, you are nearing the capacity of a single NIC. HTH, Rob
I have checked Google, and it talked about the interrupt scheduler.
Does anyone know how to configure the kernel to allow it to use all CPUs for socket transmission of UDP packets?
Any pointer will be greatly appreciated.
pete