Hello listmates,
As some of you may know we have been having a really bad problem with Realtek Semiconductor Co., Ltd. RTL-8169 cards. See here for details:
http://forum.nginx.org/read.php?24,140124,140224
So now my question is, what PCI 1 Gbit/s Ethernet adapters should I use under CentOS? If you have had a consistent positive experience with any particular chipset/brand please speak up.
Thanks.
Boris.
On 12/01/2010 08:12 PM, Boris Epstein wrote:
Hello listmates,
As some of you may know we have been having a really bad problem with Realtek Semiconductor Co., Ltd. RTL-8169 cards. See here for details:
http://forum.nginx.org/read.php?24,140124,140224
So now my question is, what PCI 1 Gbit/s Ethernet adapters should I use under CentOS? If you have had a consistent positive experience with any particular chipset/brand please speak up.
Well, Realcrap is known to be crap everywhere. Ask the OpenBSD guys. ;)
Intel. Broadcom. That's what we use here w/o any issues; however, there are some Intel NICs that are *not* able to handle Jumbo Frames due to an internal design glitch.
HTH,
Timo
On Wed, 1 Dec 2010, Timo Schoeler wrote:
Intel. Broadcom. That's what we use here w/o any issues; however, there are some Intel NICs that are *not* able to handle Jumbo Frames due to an internal design glitch.
Seconded. I have a load of Intel 82576 and 82571EB's, and there have been no issues at all, including with Jumbo frames.
Steve
On Wed, 1 Dec 2010, Steve Thompson wrote:
Seconded. I have a load of Intel 82576 and 82571EB's, and there have been no issues at all, including with Jumbo frames.
Thirded. :-) Same thing here, even with generic Intel 1 Gbit/s Ethernet cards.
Gilbert Sebenste (My opinions only!)
On Wed, Dec 1, 2010 at 2:29 PM, Gilbert Sebenste sebenste@weather.admin.niu.edu wrote:
Thirded. :-) Same thing here, even with generic Intel 1 Gbit/s Ethernet cards.
CentOS mailing list CentOS@centos.org http://lists.centos.org/mailman/listinfo/centos
Thanks. Looks good.
I just looked around, and it seems manufacturers tend not to list the chipset in their NIC specifications (see, for instance: http://www.trendnet.com/products/proddetail.asp?prod=140_TEG-PCITXR&cat=... )
Is there a list somewhere of which card uses which chipset?
It definitely looks like it is best to stick to the better chipsets; they might be a little more expensive but are definitely worth the money.
Thanks.
Boris.
On 12/01/2010 08:33 PM, Boris Epstein wrote:
Is there a list somewhere of which card uses which chipset?
It definitely looks like it is best to stick to the better chipsets; they might be a little more expensive but are definitely worth the money.
You get what you pay for -- a valid rule of thumb in life generally.
Timo
You get what you pay for -- a valid rule of thumb in life generally.
Amen!
As for the NICs: Intel, Intel, nothing but Intel. For a couple of years now I have put nothing but Intel NICs in our servers. Most of them are on-board 8257* parts, like the 82575EB in our most recent batch of servers. The other add-on cards are PRO/1000s with the 82541PI chipset.
Regards,
On Wed, Dec 1, 2010 at 8:36 PM, Timo Schoeler timo.schoeler@riscworks.net wrote:
You get what you pay for -- a valid rule of thumb in life generally.
Except with CentOS - we get SO much more than we pay for :-D
Bent Terp wrote:
On Wed, Dec 1, 2010 at 8:36 PM, Timo Schoeler timo.schoeler@riscworks.net wrote:
You get what you pay for -- a valid rule of thumb in life generally.
Except with CentOS - we get SO much more than we pay for :-D
Hah - I was thinking of another angle: so, Timo, you pay for love?
mark "that's not quite what I think of when I use that word...."
On 12/02/2010 04:34 PM, m.roth@5-cent.us wrote:
Hah - I was thinking of another angle: so, Timo, you pay for love?
No, I get paid. Billions of dollars. ;P
Timo Schoeler wrote:
No, I get paid. Billions of dollars. ;P
I'm too old - shouldn't that be "billions and billions"? <g>
mark "do you have a spare million or two from those billions?"
On Thu, Dec 2, 2010 at 10:34 AM, m.roth@5-cent.us wrote:
Hah - I was thinking of another angle: so, Timo, you pay for love?
mark "that's not quite what I think of when I use that word...."
"Rent to own"....
On Wed, Dec 1, 2010 at 2:33 PM, Boris Epstein borepstein@gmail.com wrote:
Thanks. Looks good.
I just looked around, and it seems manufacturers tend not to list the chipset in their NIC specifications (see, for instance: http://www.trendnet.com/products/proddetail.asp?prod=140_TEG-PCITXR&cat=... )
In fact, they lie. All sorts of add-on cards from various vendors, with the same card "type" and model number, have different chipsets.
This has caused me endless grief in environments where my employer refused to do normal kernel updates, and the new chipsets were only compatible with the newer kernel.
Is there a list somewhere of which card uses which chipset?
It definitely looks like it is best to stick to the better chipsets; they might be a little more expensive but are definitely worth the money.
See above, and yes.
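Since vendors rarely document the chipset, one can at least identify it on a running system. A quick sketch (eth0 is just an example interface name):

```shell
# List Ethernet controllers with their vendor:device IDs; the bracketed
# ID pair (e.g. 8086 = Intel, 10ec = Realtek, 14e4 = Broadcom) pins
# down the exact chipset regardless of what the retail box claims.
lspci -nn | grep -i ethernet

# Show which kernel driver an interface ended up with
# (e1000/igb = Intel, r8169 = Realtek, tg3/bnx2 = Broadcom).
ethtool -i eth0
```

The lspci output is also the thing to compare before buying a second batch of the "same" card.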
On 12/1/2010 2:33 PM, Boris Epstein wrote:
I just looked around, and it seems manufacturers tend not to list the chipset in their NIC specifications (see, for instance: http://www.trendnet.com/products/proddetail.asp?prod=140_TEG-PCITXR&cat=... )
TRENDnet is Realtek.
On 12/1/2010 2:33 PM, Boris Epstein wrote:
I just looked around, and it seems manufacturers tend not to list the chipset in their NIC specifications (see, for instance: http://www.trendnet.com/products/proddetail.asp?prod=140_TEG-PCITXR&cat=... )
If you look at the picture on that page, see the crab? That's Realtek.
On Thursday, December 02, 2010 03:28 AM, Steve Thompson wrote:
Seconded. I have a load of Intel 82576 and 82571EB's, and there have been no issues at all, including with Jumbo frames.
Please note that some Intel 1G NICs do not have jumbo frame support at all, so you won't have issues with jumbo frames either way; the question is whether there is jumbo frame support in the first place.
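A quick way to find out whether a given NIC/driver combination has jumbo support at all is simply to try raising the MTU and then push a full-size do-not-fragment packet through it. A sketch (eth0 and the peer address are examples):

```shell
# Drivers without jumbo frame support refuse MTUs above 1500.
ip link set dev eth0 mtu 9000 || echo "no jumbo frame support"

# Verify end to end: 8972 bytes of ICMP payload + 8 bytes ICMP header
# + 20 bytes IP header = one 9000-byte packet, do-not-fragment.
ping -M do -s 8972 -c 3 192.168.0.2
```

Note that every switch port in the path has to accept jumbo frames too, or the ping will fail even with a capable NIC.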
On Thu, 2 Dec 2010, Christopher Chan wrote:
Please note that some Intel 1G NICs do not have jumbo frame support at all, so you won't have issues with jumbo frames either way; the question is whether there is jumbo frame support in the first place.
The Intel NICs that I mentioned, 82576 and 82571EB, are both in use in DRBD replication links with an MTU of 9000. We usually configure each such link as a point-to-point bonding pair (balance-rr), and get 1.95 Gb/s throughput with iperf, and about 165 MB/sec with drbd (but of course the latter is disk dependent). CentOS 5.5, x86_64, Dell PE2900 servers. Solid.
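For anyone wanting to reproduce that setup, here is a minimal sketch of such a point-to-point balance-rr pair on CentOS 5 (interface names and addresses are examples; the other host mirrors this with IPADDR=10.0.0.2):

```shell
# /etc/modprobe.conf
#   alias bond0 bonding
#   options bond0 mode=balance-rr miimon=100

# /etc/sysconfig/network-scripts/ifcfg-bond0
DEVICE=bond0
IPADDR=10.0.0.1
NETMASK=255.255.255.0
MTU=9000
BOOTPROTO=none
ONBOOT=yes

# /etc/sysconfig/network-scripts/ifcfg-eth2
# (ifcfg-eth3 is identical apart from DEVICE=eth3)
DEVICE=eth2
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
ONBOOT=yes
```

Because the link is point-to-point (no switch reordering frames differently per port), balance-rr can actually stripe a single TCP stream across both ports, which is what gets you close to 2 Gb/s.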
Steve
On Dec 1, 2010, at 5:10 PM, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
Please note that some Intel 1G NICs do not have jumbo frame support at all, so you won't have issues with jumbo frames either way; the question is whether there is jumbo frame support in the first place.
From memory, those that don't do jumbo were the "desktop" versions of the chipset.
I believe all currently manufactured models support jumbo frames.
-Ross
On Thursday, December 02, 2010 07:50 AM, Ross Walker wrote:
I believe all currently manufactured models support jumbo frames.
The Desktop GT series certainly does not have jumbo frames. Not sure about the others.
On 12/01/2010 11:17 AM, Timo Schoeler wrote:
Intel. Broadcom. That's what we use here w/o any issues; however, there are some Intel NICs that are *not* able to handle Jumbo Frames due to an internal design glitch.
Specifically the 82573 chipsets, which are still fairly common on motherboards.
On Wed, Dec 1, 2010 at 8:26 PM, Gordon Messmer yinyang@eburg.com wrote:
Specifically the 82573 chipsets, which are still fairly common on motherboards.
That's good, but I am looking for expansion card NICs.
Boris.
On 12/01/2010 06:07 PM, Boris Epstein wrote:
On Wed, Dec 1, 2010 at 8:26 PM, Gordon Messmer yinyang@eburg.com wrote:
Specifically the 82573 chipsets, which are still fairly common on motherboards.
That's good, but I am looking for expansion card NICs.
I may not have been clear. The Intel 82573 chipsets are faulty, and should be avoided. I've mostly seen them on motherboards, but I have no reason to believe you won't find them on expansion cards. Avoid them.
On Wed, 1 Dec 2010, Timo Schoeler wrote:
Intel. Broadcom. That's what we use here w/o any issues; however, there are some Intel NICs that are *not* able to handle Jumbo Frames due to an internal design glitch.
I've had Broadcom NICs just go off into their own little world requiring the machine to be physically powered down and back up again before they'd start working again. Replaced with an Intel quad port board (igb driver) and all was decidedly well.
jh
On 12/01/2010 11:11 PM, John Hodrien wrote:
I've had Broadcom NICs just go off into their own little world requiring the machine to be physically powered down and back up again before they'd start working again. Replaced with an Intel quad port board (igb driver) and all was decidedly well.
I've seen that happen, too. I wouldn't recommend Broadcom gear, either.
Boris Epstein wrote:
So now my question is, what PCI 1 Gbit/s Ethernet adapters should I use under CentOS? If you have had a consistent positive experience with any particular chipset/brand please speak up.
I *think* most of our servers have Broadcoms.
mark
On 12/1/2010 2:12 PM, Boris Epstein wrote:
So now my question is, what PCI 1 Gbit/s Ethernet adapters should I use under CentOS? If you have had a consistent positive experience with any particular chipset/brand please speak up.
Intel. No need to mess with any Winnic (which is what most are). Broadcom is another good one.
On Wednesday 01 December 2010 20:12:18 Boris Epstein wrote:
So now my question is, what PCI 1 Gbit/s Ethernet adapters should I use under CentOS? If you have had a consistent positive experience with any particular chipset/brand please speak up.
We have O(1000) of both Broadcom and Intel NICs (various models) and we've had very few problems over the years with those. I wouldn't hesitate to buy either, but given a choice I'd go for Intel (since I do think the few hiccups we've had have, more often than not, struck the Broadcoms).
For completeness (since many previous posts have touched on this), we don't use jumbo frames since we have no problem reaching wirespeed with normal 1500 frames.
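(For anyone wanting to check their own links, the usual quick test is iperf; the host name here is an example:)

```shell
# On the receiving host:
iperf -s

# On the sending host; with standard 1500-byte frames, roughly
# 940 Mbit/s of TCP payload is "wirespeed" for 1 GbE once Ethernet,
# IP and TCP overhead is subtracted.
iperf -c receiver.example.com -t 30
```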
/Peter
On Thursday 02 December 2010 12:22:38 Christopher Chan wrote:
On Thursday, December 02, 2010 06:53 PM, Peter Kjellström wrote:
For completeness (since many previous posts have touched on this), we don't use jumbo frames since we have no problem reaching wirespeed with normal 1500 frames.
Seriously? What switches?
Mostly ProCurve, but really, 1G Ethernet goes wirespeed almost regardless of what you do to it nowadays. In fact, I can run wirespeed 10G Ethernet through our Cisco, ProCurve and Blade Networks gear without going jumbo.
IMO lots of people waste time on jumbo frames when there's really no (or very little) need.
/Peter
On Thursday, December 02, 2010 08:28 PM, Peter Kjellström wrote:
On Thursday 02 December 2010 12:22:38 Christopher Chan wrote:
On Thursday, December 02, 2010 06:53 PM, Peter Kjellström wrote:
For completeness (since many previous posts have touched on this), we don't use jumbo frames since we have no problem reaching wirespeed with normal 1500 frames.
Seriously? What switches?
Mostly ProCurve, but really, 1G Ethernet goes wirespeed almost regardless of what you do to it nowadays. In fact, I can run wirespeed 10G Ethernet through our Cisco, ProCurve and Blade Networks gear without going jumbo.
I have HP ProCurves too and I don't get wirespeed...
I'll run an artificial benchmark...maybe it's disk i/o in the way.
IMO lots of people waste time on jumbo frames when there's really no (or very little) need.
Maybe...
On 12/02/2010 04:28 AM, Peter Kjellström wrote:
IMO lots of people waste time on jumbo frames when there's really no (or very little) need.
That depends on the protocols in use and your TCP window configuration. Streaming protocols like HTTP may benefit less from jumbo frames (except, as has been noted, for reduced CPU use) in some configurations, but latency-sensitive protocols like CIFS and NFS will benefit greatly.
On Dec 3, 2010, at 2:33 PM, Gordon Messmer yinyang@eburg.com wrote:
That depends on the protocols in use and your TCP window configuration. Streaming protocols like HTTP may benefit less from jumbo frames (except, as has been noted, for reduced CPU use) in some configurations, but latency-sensitive protocols like CIFS and NFS will benefit greatly.
If the protocol is latency sensitive then jumbo frames are BAD, as they add latency: frames take longer to fill and longer to transmit, so other conversations have to wait longer (poor pipelining/interleaving).
CIFS/NFS aren't really latency sensitive protocols though. If a protocol has a big TCP window then it will not tend to be latency sensitive.
A protocol that is latency sensitive would be iSCSI with sequential IO. Each iSCSI PDU is (by default) 8K, which is a lot smaller than, say, the 32K CIFS/NFS protocol units one sees, so bumping up the latency just a tad means you will only get 25 MB/s of 4K sequential throughput instead of 40 MB/s. Bumping up the iSCSI PDU size helps, but limits the total number of simultaneous PDUs a target can handle.
That's why on 1 GbE iSCSI it's typically better to use a 1500 MTU instead of 9000. With 10 GbE, though, sending 1500-byte frames uses a lot of CPU, so there it's better to use a 9000 MTU.
Everything is a compromise due to finite resources.
-Ross
On 12/03/2010 03:48 PM, Ross Walker wrote:
If the protocol is latency sensitive then jumbo frames are BAD, as they add latency: frames take longer to fill and longer to transmit, so other conversations have to wait longer (poor pipelining/interleaving).
CIFS/NFS aren't really latency sensitive protocols though. If a protocol has a big TCP window then it will not tend to be latency sensitive.
I measure better throughput on NFS with jumbo frames than without. Measurement trumps assertions. :)
On Dec 3, 2010, at 7:48 PM, Gordon Messmer yinyang@eburg.com wrote:
On 12/03/2010 03:48 PM, Ross Walker wrote:
I measure better throughput on NFS with jumbo frames than without. Measurement trumps assertions. :)
All I was trying to get across is that jumbo frames aren't to be used in latency sensitive applications, as they add latency.
As your findings show, NFS is not a latency sensitive application, which is why you see better throughput with jumbo frames. That is also why NFS/CIFS can be used over a WAN, while a latency sensitive protocol such as iSCSI is almost useless over the WAN.
-Ross
For completeness (since many previous posts have touched on this), we don't use jumbo frames since we have no problem reaching wirespeed with normal 1500 frames.
Jumbo frames have advantages other than "reaching wirespeed". Their use produces less overhead and in general less CPU utilization. Your network will see less traffic and your CPUs will be free to do other work.
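The overhead difference is easy to put numbers on. A back-of-envelope sketch (assumes TCP/IPv4 with no options; Ethernet framing is preamble 8 + header 14 + FCS 4 + inter-frame gap 12 = 38 bytes per frame):

```shell
for mtu in 1500 9000; do
    awk -v mtu="$mtu" 'BEGIN {
        payload = mtu - 40               # minus TCP/IP headers
        wire    = mtu + 38               # plus Ethernet framing + gap
        printf "MTU %4d: %.2f%% efficiency, %d frames per MB\n",
               mtu, 100 * payload / wire, int(1048576 / payload) + 1
    }'
done
```

With a 1500 MTU the wire carries roughly six times as many frames (and the host takes roughly six times as many interrupts) for the same payload, which is where the CPU savings come from.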
On Dec 2, 2010, at 11:47 AM, miguelmedalha@sapo.pt wrote:
For completeness (since many previous posts have touched on this), we don't use jumbo frames since we have no problem reaching wirespeed with normal 1500 frames.
Jumbo frames have advantages other than "reaching wirespeed". Their use produces less overhead and in general less CPU utilization. Your network will see less traffic and your CPUs will be free to do other work.
True, like always it depends on your CPU and your network application's sensitivity to latency.
Jumbo frames equal less CPU but more network latency, and more network latency equals less achievable throughput.
You don't need jumbo with 1 GbE and a current CPU, but for 10 GbE jumbo frames are recommended, unless you can dedicate a core per 10 GbE NIC.
-Ross