I have a computer I am using to host a virtual machine. CentOS 6, 64-bit, on both.
The host machine's network connection seems fine, no problems, and access to the virtual machine is usually fine.
But then, poof: ssh, http, and ftp all lose connection for about a minute. Then they come back up.
I looked through all the logs on both machines and could find nothing, but I am not sure where to look.
My question: could this be a setting on the VM as a webserver, some new CentOS 6 setting that times out the network when it is not in use? Or something I did when I bonded my eth ports and bridged them?
The bond covers the two onboard eth ports and one port from an add-on network card.
It is intermittent and seems to happen at random; service network restart on the webserver fixes it immediately, but it also just fixes itself on its own.
Is there some setting in CentOS 6 that must be changed to allow constant 'uptime' of the network?
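For what it is worth, the bond status and the kernel log on the host look clean whenever I catch it. These are the commands I have been checking with (a diagnostic sketch; bond0 is my bond):

# which slave is active, and whether any link is flapping
cat /proc/net/bonding/bond0

# recent kernel messages, to spot link up/down events
dmesg | tail -50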
On 02/03/2012 03:41 PM, Bob Hoffman wrote:
[original post snipped]
Please share more information on your configuration, including configuration files, particularly your eth, bond, and bridge configurations.
This only happens every so often, not every five minutes. When I woke up today and tried to access the VM, I could not. From the actual host I could see it was running, and I could actually use it via virtual manager. I was only able to access ftp, http, etc. after a network restart. The network restart shut the connection down and brought it back up; it did not fail and then start on its own.
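Next time it drops I am going to watch the host side of the guest's tap device to see whether packets are even reaching the VM. A diagnostic sketch (vnet0 is the tap device libvirt created for this guest):

# on the host, during an outage
tcpdump -n -i vnet0
tcpdump -n -i vbr0 host 199.204.135.123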
On the virtual machine, which is having the issue, this is my eth0. (I have been commenting things out to see if anything changes the issue.)
DEVICE="eth0" NM_CONTROLLED="no" ONBOOT=yes HWADDR=52:54:00:D1:B3:46 TYPE=Ethernet BOOTPROTO=none IPADDR=199.204.135.123 #PREFIX=28 GATEWAY=199.204.135.113 DNS1=8.8.8.8 DNS2=8.8.4.4 DOMAIN=mike.hoffmanartdesign.com #DEFROUTE=yes #IPV4_FAILURE_FATAL=no #IPV6INIT=no NAME="System eth0" UUID=5fb06bd0-0bb0-7ffb-45f1-d6edd65f3e03 usrctl=no
On the main machine, which experiences no problems:
eth0 and eth1 are onboard, eth2 is the add-on card; all are the same except HWADDR, UUID, etc.

DEVICE="eth1"
NM_CONTROLLED="no"
ONBOOT=yes
HWADDR=00:25:90:2B:23:C3
TYPE=Ethernet
MASTER=bond0
SLAVE=yes
BOOTPROTO=none
NAME="System eth1"
UUID=9c92fad9-6ecb-3e6c-eb4d-8a47c6f50c04
usrctl=no
(I tried playing with the bonding mode but got nowhere there; again, the host machine never has an issue.)
DEVICE="bond0" NM_CONTROLLED="no" BOOTPROTO=none BRIDGE=vbr0 ONBOOT=yes userctl=no #IPADDR=199.204.135.120 #NETWORK=199.204.135.112 #NETMASK=255.255.255.240 #GATEWAY=199.204.135.113 BONDING_OPTS="mode=6 miimon=100"
DEVICE="vbr0" NM_CONTROLLED="no" TYPE="Bridge" BOOTPROTO="none" IPADDR=199.204.135.120 NETWORK=199.204.135.112 NETMASK=255.255.255.240 GATEWAY=199.204.135.113 DNS1=8.8.8.8 DNS2=8.8.4.4 ONBOOT=yes userctl=no
Not sure why vnet0 shows up in ifconfig; sometimes it is there and sometimes it is not, depending on the boot...
[root@main ~]# ifconfig
bond0     Link encap:Ethernet  HWaddr 00:25:90:2B:23:C2
          inet6 addr: fe80::225:90ff:fe2b:23c2/64 Scope:Link
          UP BROADCAST RUNNING MASTER MULTICAST  MTU:1500  Metric:1
          RX packets:676 errors:0 dropped:0 overruns:0 frame:0
          TX packets:6754 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:68718 (67.1 KiB)  TX bytes:976214 (953.3 KiB)

eth0      Link encap:Ethernet  HWaddr 00:25:90:2B:23:C2
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:653 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3706 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:67104 (65.5 KiB)  TX bytes:792687 (774.1 KiB)
          Memory:fbe60000-fbe80000

eth1      Link encap:Ethernet  HWaddr 00:25:90:2B:23:C3
          UP BROADCAST RUNNING SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:23 errors:0 dropped:0 overruns:0 frame:0
          TX packets:3048 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:1614 (1.5 KiB)  TX bytes:183527 (179.2 KiB)
          Memory:fbee0000-fbf00000

eth2      Link encap:Ethernet  HWaddr 00:1B:21:C6:E2:67
          UP BROADCAST SLAVE MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)
          Interrupt:26 Memory:fbd20000-fbd40000

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:0 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:0 (0.0 b)

vbr0      Link encap:Ethernet  HWaddr 00:25:90:2B:23:C2
          inet addr:199.204.135.120  Bcast:199.204.135.127  Mask:255.255.255.240
          inet6 addr: fe80::225:90ff:fe2b:23c2/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:263 errors:0 dropped:0 overruns:0 frame:0
          TX packets:183 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:24452 (23.8 KiB)  TX bytes:77041 (75.2 KiB)

virbr0    Link encap:Ethernet  HWaddr 52:54:00:9E:3F:9D
          inet addr:192.168.122.1  Bcast:192.168.122.255  Mask:255.255.255.0
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:0 errors:0 dropped:0 overruns:0 frame:0
          TX packets:12 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:0 (0.0 b)  TX bytes:3066 (2.9 KiB)

vnet0     Link encap:Ethernet  HWaddr FE:54:00:D1:B3:46
          inet6 addr: fe80::fc54:ff:fed1:b346/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:286 errors:0 dropped:0 overruns:0 frame:0
          TX packets:505 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:500
          RX bytes:483086 (471.7 KiB)  TX bytes:47788 (46.6 KiB)
On the virtual machine:

[root@mike ~]# ifconfig
eth0      Link encap:Ethernet  HWaddr 52:54:00:D1:B3:46
          inet addr:199.204.135.123  Bcast:199.204.135.255  Mask:255.255.255.0
          inet6 addr: fe80::5054:ff:fed1:b346/64 Scope:Link
          UP BROADCAST RUNNING MULTICAST  MTU:1500  Metric:1
          RX packets:94981 errors:0 dropped:0 overruns:0 frame:0
          TX packets:69706 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:1000
          RX bytes:83070095 (79.2 MiB)  TX bytes:15019279 (14.3 MiB)

lo        Link encap:Local Loopback
          inet addr:127.0.0.1  Mask:255.0.0.0
          inet6 addr: ::1/128 Scope:Host
          UP LOOPBACK RUNNING  MTU:16436  Metric:1
          RX packets:50 errors:0 dropped:0 overruns:0 frame:0
          TX packets:50 errors:0 dropped:0 overruns:0 carrier:0
          collisions:0 txqueuelen:0
          RX bytes:4153 (4.0 KiB)  TX bytes:4153 (4.0 KiB)
On 2/3/2012 3:45 PM, Digimer wrote:
Please share more information on your configuration, including configuration files, particularly your eth, bond, and bridge configurations.
Continuing in my venture to resolve the network issue with the virtual machine on my CentOS 6 host machine.
The intermittent 'closing off' of all network connections can only be solved by service network restart.
The last thing I am going to try is the different NIC devices I can use in virtual machine manager. The one I was using is the default, I guess: virtio.
The other options are e1000, ne2k_pci, rtl8139, and hypervisor default.
The e1000 has thousands of pages online about issues with CentOS, so I will try that last. Going to try hypervisor default first.
Other than changing this option, I can find no reason for the network services to just go missing with nothing in the logs... they just disappear from the net.
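For reference, the NIC model can also be switched outside of virt-manager by editing the guest's libvirt XML. A sketch, assuming the domain is named mike like the guest's hostname:

virsh edit mike

# then, in the domain XML, the model line inside the interface block:
<interface type='bridge'>
  <source bridge='vbr0'/>
  <model type='e1000'/>    <!-- or virtio, rtl8139, ne2k_pci -->
</interface>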
----------------------- snip -----------------------
CentOS 6 host, CentOS 6 virtual machine. The network connection from outside the server disappears with regard to the virtual server.
----------------------- snip -----------------------
Tested the heck out of it.
Further testing shows the network unreachable even if the network is restarted on the host. A simple ping going out from the virtual machine allows full traffic both ways. It is an intermittent timeout: it may be 5 minutes, it may be an hour, but the connection will disappear.
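Given that one outgoing ping wakes it up, an ugly stopgap is a background ping from the guest to keep the path warm while I keep digging. A sketch (199.204.135.113 is my gateway):

# on the VM, e.g. started from /etc/rc.local
ping -i 30 199.204.135.113 > /dev/null 2>&1 &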
I found a small config sample on an old website in the middle of nowhere. It had an addition to the /etc/sysconfig/network file of the host: it added "GATEWAY=br0".
I did this and restarted the network service, and, I believe without me doing anything else, this appeared in the virtual machine's message log, something that had never appeared before:
kernel: Bridge firewalling registered
So I am hoping that is it... not sure whether I would need to do that on the virtual machine as well. I have been at this for days. I have literally rewritten every single ifcfg and conf file I could find and tried hundreds of permutations. Nothing has worked.
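Since "Bridge firewalling registered" means iptables is now being applied to bridged frames, the bridge-nf sysctls seem worth checking; turning them off is a common tweak for bridged VM traffic (a sketch, not something I have confirmed fixes this):

# see whether iptables inspects bridged traffic
sysctl net.bridge.bridge-nf-call-iptables

# disable it (add the same line to /etc/sysctl.conf to persist)
sysctl -w net.bridge.bridge-nf-call-iptables=0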
If this does not work, I am willing to pay someone to end my week-long battle with the virtual machine network being unreachable no matter what I do. But only if you actually know this stuff and actually have experience with virtual machine bridging... and have hopefully seen this type of issue.
<banging head on wall> <repeat> <rinse> <repeat>
Last post on this, sorta solved.
original post: -----------------------------------
[original post snipped; quoted in full at the top of the thread]
------------------------------
I took out the bond and found that it was the issue; it works fine without it. However, I also brought up a second VM and found something interesting.
1- With two VMs, only one failed; the other stayed up 100% of the time.
2- The second NIC card was not working well, but even with it taken out the issue remained.
3- Pinging the systems, I found the VM that brought up vnet0 had exactly the same ping times as the host; the vnet1 VM had double.
4- No matter what order the VMs were brought up, whichever got assigned libvirt's vnet0 would fail; the other would not fail at all.
5- The ping of the host and the vnet0-assigned VM were exactly the same every time; the vnet1 VM was a little more than double that (12ms versus 28ms).
6- The host never lost connection, even though it uses the same bridge and bond to connect.
It has become logical in my thought process that the host and the first VM are somehow in conflict, and the host wins, via the bonding software. It seems like, with VMs, the host should not be connected to the bond, and that might work; but I am way too over this to test it out.
Sharing the bridge and the bond makes me feel the first virtual machine brought up, the one assigned libvirt's vnet0, eventually lost some ARP contest to the host.
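That would fit the bonding mode, for what it is worth: mode 6 (balance-alb) balances received traffic by rewriting ARP replies, and it is known to misbehave when the bond sits under a bridge with extra MAC addresses hanging off it. If anyone wants to test, active-backup does no ARP tricks; it is a one-line change in ifcfg-bond0 (a sketch):

# active-backup instead of balance-alb
BONDING_OPTS="mode=1 miimon=100"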
A third VM was added; it never failed if it was not brought up first, and it had the same ping rate as the vnet1 VM, double that of the host and the vnet0 virtual machine.
What is causing that is beyond my knowledge and is a question for experts on libvirt's vnet system, the bonding driver, and possibly eth bridges. All I know is the host never failed even though it was using the same bond/bridge, and maybe that is the real issue. In a VM environment, maybe the host should have its own connection, NOT on the bond shared by the VMs?
Using physical bridges may have confused the bond when that first VM came up...
Well, that is a long couple of weeks' work. Right now I am just going to assign the eths directly to the bridge and forget bonding as a really bad nightmare. I hope someone tests this out a bit and comes up with a brilliant yet really techy solution.
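For anyone following along, the no-bond layout I am falling back to is just a NIC enslaved straight to the bridge, with the bridge keeping the IP. A sketch with a single port, using my existing names and addresses:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
NM_CONTROLLED=no
BOOTPROTO=none
BRIDGE=vbr0

# /etc/sysconfig/network-scripts/ifcfg-vbr0, same as before
DEVICE=vbr0
TYPE=Bridge
ONBOOT=yes
BOOTPROTO=none
IPADDR=199.204.135.120
NETMASK=255.255.255.240
GATEWAY=199.204.135.113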