I upgraded one of my old machines running CentOS 5.x to the latest kernel (from 308.24.1 to 348.1.1). After rebooting, network connectivity was gone. I rebooted with the old kernel and also tried the one before that (308.20.1); still no luck. So I assume it has nothing to do with the kernel or even CentOS. But a hardware failure also seems unlikely; see below.

ethtool shows the link as up, and as down if I pull the cable. I attached a laptop via a crossover cable; it detects the link, but the problem is the same. I disabled iptables and set SELinux to disabled. No change. There's a Xen VM running on that machine and I can ping it from the host, so internal networking seems to be OK. I'm using bridged networking for Xen connectivity, set up by the normal Red Hat means, not via Xen's scripts; that has never been a problem. There are no errors in the logs, except dhcpd complaining that the network is down, and named giving some odd errors as well. This is my only dhcpd, so I would like to have it back up ASAP :-(

Is there anything else besides a weird hardware failure that I could check? I'm going to get a new card tomorrow and see if that changes the situation. This is the motherboard's onboard NIC, based on the nForce MCP61. Has anyone seen a hardware failure like this, where the link comes up but no packets go over the wire? It seems a bit unlikely that such a failure (and nothing else) should happen exactly on a reboot after an upgrade.

Thanks. Kai
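
PS: For reference, this is roughly what I checked, as a rough sketch; the interface and bridge names (eth0, br0) and the domU address are placeholders from memory, not exact copies of my config:

    # link status on the onboard NIC (name assumed to be eth0)
    ethtool eth0                  # "Link detected: yes"; flips to "no" when the cable is pulled

    # rule out the firewall and SELinux
    service iptables stop
    getenforce                    # after setting SELINUX=disabled in /etc/selinux/config and rebooting

    # the Red Hat-style bridge used for Xen (bridge name br0 is a placeholder)
    brctl show                    # eth0 should show up as a port of the bridge
    cat /etc/sysconfig/network-scripts/ifcfg-br0
    cat /etc/sysconfig/network-scripts/ifcfg-eth0

    # internal connectivity from dom0 to the domU (address is a placeholder)
    ping 192.168.1.10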