Hi all,
I need to move ten CentOS/RHEL 5.4 VMware guests from a VMware ESXi 4 host to a fully patched CentOS 5.4 KVM host, but I have two questions.
First: all these guests have two e1000 network interfaces, defined as eth0 and eth1, and the modprobe.conf file is the same on all of them:
alias eth0 e1000
alias eth1 e1000
alias net-pf-10 off
options ipv6 disable=1
Can I change the e1000 driver to the virtio_net driver without problems, and then update the kernel?
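If it helps, this is what I expect modprobe.conf to look like after the change (just my sketch, keeping the same IPv6 settings):

```
alias eth0 virtio_net
alias eth1 virtio_net
alias net-pf-10 off
options ipv6 disable=1
```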
Second question: all the MAC addresses are in the VMware range (00:50:56:XX:XX:XX), and I have configured them all with the HWADDR option in the ifcfg-ethX files. Do I need to change all the MAC addresses to the KVM range, or can I keep the VMware range?
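For reference, each ifcfg file currently looks roughly like this (a sketch; DEVICE, BOOTPROTO and the exact MAC differ per guest):

```
# /etc/sysconfig/network-scripts/ifcfg-eth0 (sketch)
DEVICE=eth0
BOOTPROTO=static
ONBOOT=yes
HWADDR=00:50:56:XX:XX:XX
```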
I don't need to convert the virtual disks, because all these guests are installed on iSCSI LUNs served by Solaris/ZFS servers.
Many thanks.
I would keep the MAC addresses; otherwise the guests will see the NICs as new network cards and require you to make changes.
I would also just use the e1000 emulation. There's nothing better about the virtio devices...
2010/2/18 compdoc compdoc@hotrodpc.com:
I would also just use the e1000 emulation. There's nothing better about the virtio devices...
...other than lower CPU utilization and higher throughput? Since these are CentOS/RHEL guests, I wouldn't even consider using e1000; I would just go for virtio_net.
I don't have any links, but some time ago there were some benchmarks (on the KVM development list?) showing e1000 maxing out at around 300-400 Mbit/s while virtio got around 900 Mbit/s. Even with the much higher throughput, the virtio_net driver still had the same or lower CPU utilization.
Do your own testing if in doubt.
Best Regards Kenni Lund
On Sat, 2010-02-20 at 02:41 +0100, Kenni Lund wrote:
2010/2/18 compdoc compdoc@hotrodpc.com:
I would also just use the e1000 emulation. There's nothing better about the virtio devices...
...other than lower CPU utilization and higher throughput? Since these are CentOS/RHEL guests, I wouldn't even consider using e1000; I would just go for virtio_net.
I don't have any links, but some time ago there were some benchmarks (on the KVM development list?) showing e1000 maxing out at around 300-400 Mbit/s while virtio got around 900 Mbit/s. Even with the much higher throughput, the virtio_net driver still had the same or lower CPU utilization.
Do your own testing if in doubt.
Best Regards Kenni Lund
Are you using test-signed drivers, or do you have a redistributable source for release-signed drivers (that chain to a Microsoft root)?
Of course, things are much simpler if you are using 32-bit M$ guests.
Never got the chance to test the relative performance of e1000 vs. the netkvm drivers on an M$ guest, but on a CentOS 5 guest, virtio absolutely blows the doors off e1000 performance.
Using ttcp in both directions, typical C5-host-to-C5-guest throughput for e1000 was in the 30-60 MB/s range, and virtio was in the 300-400 MB/s range.
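For reference, these were plain ttcp runs, roughly like this (a sketch from memory; the receiver hostname is a placeholder):

```
# On the receiving end (guest or host):
ttcp -r -s
# On the transmitting end, pointing at the receiver:
ttcp -t -s receiver-host
```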
Steve
2010/2/20 S.Tindall tindall.satwth@brandxmail.com:
On Sat, 2010-02-20 at 02:41 +0100, Kenni Lund wrote:
2010/2/18 compdoc compdoc@hotrodpc.com:
I would also just use the e1000 emulation. There's nothing better about the virtio devices...
...other than lower CPU utilization and higher throughput? Since these are CentOS/RHEL guests, I wouldn't even consider using e1000; I would just go for virtio_net.
I don't have any links, but some time ago there were some benchmarks (on the KVM development list?) showing e1000 maxing out at around 300-400 Mbit/s while virtio got around 900 Mbit/s. Even with the much higher throughput, the virtio_net driver still had the same or lower CPU utilization.
Do your own testing if in doubt.
Best Regards Kenni Lund
Are you using test-signed drivers, or do you have a redistributable source for release-signed drivers (that chain to a Microsoft root)?
I don't think anyone is talking about Windows in this thread? You don't need Microsoft-signed drivers for Linux guests ;)
Never got the chance to test the relative performance of e1000 vs. the netkvm drivers on an M$ guest, but on a CentOS 5 guest, virtio absolutely blows the doors off e1000 performance.
Using ttcp in both directions, typical C5-host-to-C5-guest throughput for e1000 was in the 30-60 MB/s range, and virtio was in the 300-400 MB/s range.
Exactly... :)
Best Regards Kenni Lund
On Sat, 2010-02-20 at 14:10 +0100, Kenni Lund wrote:
2010/2/20 S.Tindall tindall.satwth@brandxmail.com:
Are you using test-signed drivers, or do you have a redistributable source for release-signed drivers (that chain to a Microsoft root)?
I don't think anyone is talking about Windows in this thread?
You are correct. I thought this was a continuation of the Windows guest thread (same people), but it is not:
http://lists.centos.org/pipermail/centos-virt/2010-February/001654.html
I missed the change in subject/title. Sorry for the noise.
Steve
Fri 2/19/2010 Kenni Lund kenni@kelu.dk
Do your own testing if in doubt.
OK.
Hosts: CentOS 5.4 with KVM and the latest (yum) updates. The switch is a 3Com gigabit switch. These are production servers and each has guests doing actual work, but the servers' CPUs are idle most of the time.
Guests: CentOS 5.4 with the latest updates. Only the GNOME desktop is installed (no server components). Each guest has 2 CPUs and 2 NICs assigned: one NIC is e1000, one is virtio. netperf-2.4.4-1.rhel5.x86_64.rpm is running on each.
E1000 test: Time: 10.03 secs, Throughput: 429.76, Utilization: 23.64%
Virtio test: Time: 10.03 secs, Throughput: 539.15, Utilization: 5.29%
So there is an improvement...
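Dividing throughput by CPU utilization makes the gap clearer; a quick back-of-the-envelope check on the numbers above:

```shell
# Throughput per percent of CPU, from the netperf results above
awk 'BEGIN {
    printf "e1000:  %.1f per %% CPU\n", 429.76/23.64
    printf "virtio: %.1f per %% CPU\n", 539.15/5.29
}'
```

Just arithmetic on the posted numbers, but per unit of CPU, virtio moves roughly 5-6x as much traffic in this test.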