I'm running CentOS 5.1 with all updates, and the xen kernel. For some reason the OS is not seeing the full amount of RAM.
# uname -a
Linux CentOS-VM-A 2.6.18-53.1.21.el5xen #1 SMP Tue May 20 10:03:27 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux
# free
             total       used       free     shared    buffers     cached
Mem:       6104064    3445136    2658928          0    1412236    1515032
-/+ buffers/cache:     517868    5586196
Swap:      2031608          0    2031608
On another box with identical hardware, I see:

# free
             total       used       free     shared    buffers     cached
Mem:       5038080    1818980    3219100          0     145208    1148608
-/+ buffers/cache:     525164    4512916
Swap:      2031608        152    2031456
Both of these boxes have 8GB of RAM. Is there a reason I'm not seeing all of it?
Russ
Ruslan Sivak wrote:
I'm running CentOS 5.1 with all updates, and the xen kernel. For some reason the OS is not seeing the full amount of RAM.

# uname -a
Linux CentOS-VM-A 2.6.18-53.1.21.el5xen #1 SMP Tue May 20 10:03:27 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux
# free
             total       used       free     shared    buffers     cached
Mem:       6104064    3445136    2658928          0    1412236    1515032
-/+ buffers/cache:     517868    5586196
Swap:      2031608          0    2031608
what does cat /proc/meminfo say?
what do you see at the top of `dmesg` relating to memory (first 100 or so lines).
Like, on an x86_64 RHEL4 quad Opteron 850 box I have here w/ 8GB, I see...
# free
             total       used       free     shared    buffers     cached
Mem:       8005256    7953140      52116          0      78172    7287724
-/+ buffers/cache:     587244    7418012
Swap:      8385920        208    8385712
# dmesg | more
Bootdata ok (command line is ro root=LABEL=/ rhgb quiet)
Linux version 2.6.9-55.0.6.ELsmp (brewbuilder@ls20-bc2-14.build.redhat.com) (gcc version 3.4.6 20060404 (Red Hat 3.4.6-8)) #1 SMP Thu Aug 23 11:13:21 EDT 2007
BIOS-provided physical RAM map:
 BIOS-e820: 0000000000000000 - 000000000009f400 (usable)
 BIOS-e820: 000000000009f400 - 00000000000a0000 (reserved)
 BIOS-e820: 00000000000f0000 - 0000000000100000 (reserved)
 BIOS-e820: 0000000000100000 - 00000000f57fa000 (usable)
 BIOS-e820: 00000000f57fa000 - 00000000f5800000 (ACPI data)
 BIOS-e820: 00000000fdc00000 - 00000000fdc01000 (reserved)
 BIOS-e820: 00000000fdc10000 - 00000000fdc11000 (reserved)
 BIOS-e820: 00000000fdc20000 - 00000000fdc21000 (reserved)
 BIOS-e820: 00000000fdc30000 - 00000000fdc31000 (reserved)
 BIOS-e820: 00000000fec00000 - 00000000fec01000 (reserved)
 BIOS-e820: 00000000fec10000 - 00000000fec11000 (reserved)
 BIOS-e820: 00000000fec20000 - 00000000fec21000 (reserved)
 BIOS-e820: 00000000fee00000 - 00000000fee10000 (reserved)
 BIOS-e820: 00000000ff800000 - 0000000100000000 (reserved)
 BIOS-e820: 0000000100000000 - 00000001fffff000 (usable)
ACPI: RSDP (v002 HP ) @ 0x00000000000f4f20
ACPI: XSDT (v001 HP A01 0x00000002 Ò 0x0000162e) @ 0x00000000f57fa400
ACPI: FADT (v003 HP A01 0x00000002 Ò 0x0000162e) @ 0x00000000f57fa480
ACPI: MADT (v001 HP 00000083 0x00000002 0x00000000) @ 0x00000000f57fa100
ACPI: SPCR (v001 HP SPCRRBSU 0x00000001 Ò 0x0000162e) @ 0x00000000f57fa200
ACPI: SRAT (v001 HP A01 0x00000001 0x00000000) @ 0x00000000f57fa280
ACPI: DSDT (v001 HP DSDT 0x00000001 MSFT 0x02000001) @ 0x0000000000000000
Scanning NUMA topology in Northbridge 24
Number of nodes 4 (30030)
Node 0 MemBase 0000000000000000 Limit 000000007fffffff
Node 1 MemBase 0000000080000000 Limit 00000000ffffffff
Node 2 MemBase 0000000100000000 Limit 000000017fffffff
Node 3 MemBase 0000000180000000 Limit 00000001fffff000
Using 22 for the hash shift. Max addr is 1fffff000
Using node hash shift of 22
Bootmem setup node 0 0000000000000000-000000007fffffff
Bootmem setup node 1 0000000080000000-00000000ffffffff
Bootmem setup node 2 0000000100000000-000000017fffffff
Bootmem setup node 3 0000000180000000-00000001fffff000
On node 0 totalpages: 524287
  DMA zone: 4096 pages, LIFO batch:1
  Normal zone: 520191 pages, LIFO batch:16
  HighMem zone: 0 pages, LIFO batch:1
On node 1 totalpages: 524287
  DMA zone: 0 pages, LIFO batch:1
  Normal zone: 524287 pages, LIFO batch:16
  HighMem zone: 0 pages, LIFO batch:1
On node 2 totalpages: 524287
  DMA zone: 0 pages, LIFO batch:1
  Normal zone: 524287 pages, LIFO batch:16
  HighMem zone: 0 pages, LIFO batch:1
On node 3 totalpages: 524287
  DMA zone: 0 pages, LIFO batch:1
  Normal zone: 524287 pages, LIFO batch:16
  HighMem zone: 0 pages, LIFO batch:1
DMI 2.3 present.
ACPI: PM-Timer IO Port: 0x908
ACPI: Local APIC address 0xfee00000
ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
Processor #0 15:5 APIC version 16
ACPI: LAPIC (acpi_id[0x01] lapic_id[0x01] enabled)
Processor #1 15:5 APIC version 16
ACPI: LAPIC (acpi_id[0x02] lapic_id[0x02] enabled)
Processor #2 15:5 APIC version 16
ACPI: LAPIC (acpi_id[0x03] lapic_id[0x03] enabled)
Processor #3 15:5 APIC version 16
ACPI: LAPIC (acpi_id[0x04] lapic_id[0x04] disabled)
ACPI: LAPIC (acpi_id[0x05] lapic_id[0x05] disabled)
ACPI: LAPIC (acpi_id[0x06] lapic_id[0x06] disabled)
ACPI: LAPIC (acpi_id[0x07] lapic_id[0x07] disabled)
ACPI: LAPIC_NMI (acpi_id[0xff] high edge lint[0x1])
Setting APIC routing to flat
ACPI: IOAPIC (id[0x04] address[0xfec00000] gsi_base[0])
IOAPIC[0]: apic_id 4, version 17, address 0xfec00000, GSI 0-23
ACPI: IOAPIC (id[0x05] address[0xfec10000] gsi_base[24])
IOAPIC[1]: apic_id 5, version 17, address 0xfec10000, GSI 24-27
ACPI: IOAPIC (id[0x06] address[0xfec20000] gsi_base[28])
IOAPIC[2]: apic_id 6, version 17, address 0xfec20000, GSI 28-31
ACPI: IOAPIC (id[0x07] address[0xfdc00000] gsi_base[32])
IOAPIC[3]: apic_id 7, version 17, address 0xfdc00000, GSI 32-35
ACPI: IOAPIC (id[0x08] address[0xfdc10000] gsi_base[36])
IOAPIC[4]: apic_id 8, version 17, address 0xfdc10000, GSI 36-39
ACPI: IOAPIC (id[0x09] address[0xfdc20000] gsi_base[40])
IOAPIC[5]: apic_id 9, version 17, address 0xfdc20000, GSI 40-43
ACPI: IOAPIC (id[0x0a] address[0xfdc30000] gsi_base[44])
IOAPIC[6]: apic_id 10, version 17, address 0xfdc30000, GSI 44-47
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 high edge)
ACPI: IRQ0 used by override.
ACPI: IRQ2 used by override.
ACPI: IRQ9 used by override.
Using ACPI (MADT) for SMP configuration information
Allocating PCI resources starting at f6000000 (gap: f5800000:8400000)
Checking aperture...
CPU 0: aperture @ f30e000000 size 32 MB
Aperture from northbridge cpu 0 too small (32 MB)
No AGP bridge found
Your BIOS doesn't leave a aperture memory hole
Please enable the IOMMU option in the BIOS setup
This costs you 64 MB of RAM
Mapping aperture over 65536 KB of RAM @ 4000000
Built 4 zonelists
Kernel command line: ro root=LABEL=/ rhgb quiet console=tty0
Initializing CPU#0
PID hash table entries: 4096 (order: 12, 131072 bytes)
time.c: Using 3.579545 MHz PM timer.
time.c: Detected 2396.906 MHz processor.
Console: colour VGA+ 80x25
Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes)
Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Memory: 8003652k/8388604k available (2115k kernel code, 0k reserved, 1306k data, 208k init)
.......
John R Pierce wrote:
what does cat /proc/meminfo say?
# cat /proc/meminfo
MemTotal:      6104064 kB
MemFree:       1992580 kB
Buffers:       2060812 kB
Cached:        1515056 kB
SwapCached:          0 kB
Active:        1913988 kB
Inactive:      1793012 kB
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:      6104064 kB
LowFree:       1992580 kB
SwapTotal:     2031608 kB
SwapFree:      2031608 kB
Dirty:             100 kB
Writeback:           0 kB
AnonPages:      131120 kB
Mapped:          13216 kB
Slab:           137600 kB
PageTables:       9004 kB
NFS_Unstable:        0 kB
Bounce:              0 kB
CommitLimit:   5083640 kB
Committed_AS:   354988 kB
VmallocTotal: 34359738367 kB
VmallocUsed:      3196 kB
VmallocChunk: 34359734335 kB
what do you see at the top of `dmesg` relating to memory (first 100 or so lines).
Bootdata ok (command line is ro root=/dev/VolGroup00/LogVol00)
Linux version 2.6.18-53.1.21.el5xen (mockbuild@builder10.centos.org) (gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)) #1 SMP Tue May 20 10:03:27 EDT 2008
BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 00000001ef8fb000 (usable)
On node 0 totalpages: 2029819
  DMA zone: 2029819 pages, LIFO batch:31
DMI 2.5 present.
ACPI: RSDP (v002 DELL ) @ 0x00000000000f97a0
ACPI: XSDT (v001 DELL FX09 0x42302e31 AWRD 0x00000000) @ 0x00000000cf5e3080
ACPI: FADT (v003 DELL FX09 0x42302e31 AWRD 0x00000000) @ 0x00000000cf5e7200
ACPI: HPET (v001 DELL FX09 0x42302e31 AWRD 0x00000098) @ 0x00000000cf5e73c0
ACPI: MCFG (v001 DELL FX09 0x42302e31 AWRD 0x00000000) @ 0x00000000cf5e7400
ACPI: SLIC (v001 DELL FX09 0x42302e31 AWRD 0x00000000) @ 0x00000000cf5e7440
ACPI: OSFR (v001 DELL FX09 0x42302e31 AWRD 0x00000000) @ 0x00000000cf5e75c0
ACPI: MADT (v001 DELL FX09 0x42302e31 AWRD 0x00000000) @ 0x00000000cf5e7300
ACPI: SSDT (v001 PmRef CpuPm 0x00003000 INTL 0x20041203) @ 0x00000000cf5e7f60
ACPI: DSDT (v001 DELL AWRDACPI 0x00001000 MSFT 0x03000000) @ 0x0000000000000000
ACPI: Local APIC address 0xfee00000
ACPI: LAPIC (acpi_id[0x00] lapic_id[0x00] enabled)
ACPI: LAPIC (acpi_id[0x01] lapic_id[0x02] enabled)
ACPI: LAPIC (acpi_id[0x02] lapic_id[0x03] enabled)
ACPI: LAPIC (acpi_id[0x03] lapic_id[0x01] enabled)
ACPI: LAPIC_NMI (acpi_id[0x00] high edge lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x01] high edge lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x02] high edge lint[0x1])
ACPI: LAPIC_NMI (acpi_id[0x03] high edge lint[0x1])
ACPI: IOAPIC (id[0x04] address[0xfec00000] gsi_base[0])
IOAPIC[0]: apic_id 4, version 32, address 0xfec00000, GSI 0-23
ACPI: INT_SRC_OVR (bus 0 bus_irq 0 global_irq 2 dfl dfl)
ACPI: INT_SRC_OVR (bus 0 bus_irq 9 global_irq 9 high level)
ACPI: IRQ0 used by override.
ACPI: IRQ2 used by override.
ACPI: IRQ9 used by override.
Setting APIC routing to xen
Using ACPI (MADT) for SMP configuration information
Allocating PCI resources starting at d0000000 (gap: cf600000:10a00000)
Built 1 zonelists. Total pages: 2029819
Kernel command line: ro root=/dev/VolGroup00/LogVol00
Initializing CPU#0
PID hash table entries: 4096 (order: 12, 32768 bytes)
Xen reported: 2394.066 MHz processor.
Console: colour VGA+ 80x25
Dentry cache hash table entries: 1048576 (order: 11, 8388608 bytes)
Inode-cache hash table entries: 524288 (order: 10, 4194304 bytes)
Software IO TLB enabled:
 Aperture: 64 megabytes
 Kernel range: 0xffff88000a9d9000 - 0xffff88000e9d9000
PCI-DMA: Using software bounce buffering for IO (SWIOTLB)
Memory: 7875568k/8119276k available (2358k kernel code, 234816k reserved, 1325k data, 172k init)
...
Ruslan Sivak wrote:
John R Pierce wrote:
what does cat /proc/meminfo say?
# cat /proc/meminfo
MemTotal:      6104064 kB
...
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:      6104064 kB
LowFree:       1992580 kB
...
Linux version 2.6.18-53.1.21.el5xen (mockbuild@builder10.centos.org) (gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)) #1 SMP Tue May 20 10:03:27 EDT 2008
BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 00000001ef8fb000 (usable)
that range is only about 7.7 GiB (0x1ef8fb000 bytes is roughly 7929 MiB of the 8192 MiB installed), so the rest is getting lost somewhere.
I'm unfamiliar with Xen's innards...
On Tue, 2008-06-10 at 22:54 -0700, John R Pierce wrote:
Ruslan Sivak wrote:
John R Pierce wrote:
what does cat /proc/meminfo say?
# cat /proc/meminfo
MemTotal:      6104064 kB
...
HighTotal:           0 kB
HighFree:            0 kB
LowTotal:      6104064 kB
LowFree:       1992580 kB
...
Linux version 2.6.18-53.1.21.el5xen (mockbuild@builder10.centos.org) (gcc version 4.1.2 20070626 (Red Hat 4.1.2-14)) #1 SMP Tue May 20 10:03:27 EDT 2008
BIOS-provided physical RAM map:
 Xen: 0000000000000000 - 00000001ef8fb000 (usable)
that range is only about 7.7 GiB (0x1ef8fb000 bytes is roughly 7929 MiB of the 8192 MiB installed), so the rest is getting lost somewhere.
I'm unfamiliar with Xen's innards...
How many VMs are running and how much memory do they consume?
This memory is not shown in DOM0 any more.
The total memory should be visible within xentop.
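For a quick check (a sketch; xm is the toolstack shipped with CentOS 5, and Domain-0 is the name it uses for DOM0):

# xm info | grep memory
# xm list

xm info reports total_memory and free_memory as the hypervisor sees them, and xm list shows the per-domain allocations, Domain-0 included; between them you should be able to account for the full 8GB.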
wkr Henry
On Wed, Jun 11, 2008 at 9:04 AM, henry ritzlmayr centos@rc0.at wrote:
How many VMs are running and how much memory do they consume?
This memory is not shown in DOM0 any more.
The total memory should be visible within xentop.
Or with:
# virsh nodeinfo
CPU model:           x86_64
CPU(s):              4
CPU frequency:       2333 MHz
CPU socket(s):       2
Core(s) per socket:  2
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         10484736 kB
Regards, Tim
Tim Verhoeven wrote:
On Wed, Jun 11, 2008 at 9:04 AM, henry ritzlmayr centos@rc0.at wrote:
How many VMs are running and how much memory do they consume?
This memory is not shown in DOM0 any more.
The total memory should be visible within xentop.
Or with:
# virsh nodeinfo
CPU model:           x86_64
CPU(s):              4
CPU frequency:       2333 MHz
CPU socket(s):       2
Core(s) per socket:  2
Thread(s) per core:  1
NUMA cell(s):        1
Memory size:         10484736 kB
Regards, Tim
While it seems to make sense (and both xentop and virsh nodeinfo show the right amount of memory), even when I shut down one of the VMs, free and top still think I only have 6GB of RAM.
Russ
On Wed, Jun 11, 2008 at 4:46 PM, Ruslan Sivak russ@vshift.com wrote:
While it seems to make sense (and both xentop and virsh nodeinfo show the right amount of memory), even when I shut down one of the VMs, free and top still think I only have 6GB of RAM.
That is normal; the memory that was used by VMs is not automatically returned to the dom0, and therefore won't show up when running free and top.
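If you want it back in the dom0, you can usually balloon it up again by hand. A sketch, assuming the standard xm toolstack and that the hypervisor actually has that much memory free (the target is in MB):

# xm mem-set Domain-0 7000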
Regards, Tim
Tim Verhoeven wrote:
On Wed, Jun 11, 2008 at 4:46 PM, Ruslan Sivak russ@vshift.com wrote:
While it seems to make sense (and both xentop and virsh nodeinfo show the right amount of memory), even when I shut down one of the VMs, free and top still think I only have 6GB of RAM.
That is normal; the memory that was used by VMs is not automatically returned to the dom0, and therefore won't show up when running free and top.
Regards, Tim
I guess it has something to do with the ballooning driver for Dom0. It looks like I just tried to allocate too much memory to the DomU and the box went down hard. I think there's a setting in Xen for the minimum amount of memory Dom0 can go down to, but I'm not sure why Dom0 is using 600MB of RAM. Is there a mini installation of CentOS that I can do that would use less RAM? I've already unchecked all the boxes when installing CentOS. I would like Dom0 to be as small as possible, both due to RAM usage and from a security perspective.
Russ
On Wed, Jun 11, 2008 at 8:36 AM, Ruslan Sivak russ@vshift.com wrote:
I guess it has something to do with the ballooning driver for Dom0. It looks like I just tried to allocate too much memory to the DomU and the box went down hard. I think there's a setting in Xen for the minimum amount of memory Dom0 can go down to, but I'm not sure why Dom0 is using 600MB of RAM. Is there a mini installation of CentOS that I can do that would use less RAM? I've already unchecked all the boxes when installing CentOS. I would like Dom0 to be as small as possible, both due to RAM usage and from a security perspective.
I've not familiarized myself with Xen yet, but have you considered VMware Server? I haven't had any serious problems with it, and none at all since v1.0.5 came out (1.0.6 is the current one). It works nicely, stays within its memory allocation, and top et al. work as you'd expect them to.
HTH
mhr
MHR wrote:
On Wed, Jun 11, 2008 at 8:36 AM, Ruslan Sivak russ@vshift.com wrote:
I guess it has something to do with the ballooning driver for Dom0. It looks like I just tried to allocate too much memory to the DomU and the box went down hard. I think there's a setting in Xen for the minimum amount of memory Dom0 can go down to, but I'm not sure why Dom0 is using 600MB of RAM. Is there a mini installation of CentOS that I can do that would use less RAM? I've already unchecked all the boxes when installing CentOS. I would like Dom0 to be as small as possible, both due to RAM usage and from a security perspective.
I've not familiarized myself with Xen yet, but have you considered VMware Server? I haven't had any serious problems with it, and none at all since v1.0.5 came out (1.0.6 is the current one). It works nicely, stays within its memory allocation, and top et al. work as you'd expect them to.
HTH
mhr
Are you talking about VMware Server 1? Isn't there an issue with only being able to allocate 3.4GB of RAM or something to that effect? I guess it wouldn't be an issue, since I only have 8GB of RAM on this box, unless I wanted to allocate a lot of RAM to a single process.

I, too, like VMware Server 1 and have been using it in production under Windows. Does it support paravirtualization at all?

I tried the VMware Server 2 beta, and they made it pretty much unusable. The web interface isn't that great, and there is no way to tell it to use an LVM volume. Hopefully they will improve it in the future.

It's too bad that you can't use Xen and VMware on the same box. I could have run Windows stuff in VMware and Linux stuff paravirtualized in Xen.
Russ
On Wed, Jun 11, 2008 at 10:29 AM, Ruslan Sivak russ@vshift.com wrote:
Are you talking about VMware Server 1?
Yes.
Isn't there an issue with only being able to allocate 3.4GB of RAM or something to that effect? I guess it wouldn't be an issue, since I only have 8GB of RAM on this box, unless I wanted to allocate a lot of RAM to a single process.
Yes, VMS1 only supports up to 3.6GB of memory.
I, too, like VMware Server 1 and have been using it in production under Windows. Does it support paravirtualization at all?
Not according to VMware - that's up for v2, whenever that comes out.
mhr
On Wed, 2008-06-11 at 10:06 -0700, MHR wrote:
On Wed, Jun 11, 2008 at 8:36 AM, Ruslan Sivak russ@vshift.com wrote:
I guess it has something to do with the ballooning driver for Dom0. It looks like I just tried to allocate too much memory to the DomU and the box went down hard. I think there's a setting in Xen for the minimum amount of memory Dom0 can go down to, but I'm not sure why Dom0 is using 600MB of RAM. Is there a mini installation of CentOS that I can do that would use less RAM? I've already unchecked all the boxes when installing CentOS. I would like Dom0 to be as small as possible, both due to RAM usage and from a security perspective.
I've not familiarized myself with Xen yet, but have you considered VMware Server? I haven't had any serious problems with it, and none at all since v1.0.5 came out (1.0.6 is the current one). It works nicely, stays within its memory allocation, and top et al. work as you'd expect them to.
HTH
mhr
I evaluated VMware Server myself (v1.0.3), and at that time disk I/O was pretty bad within a virtual machine. The only solution I found was Xen with paravirtualization. Has there been any progress on that in later releases?
For example:
dd if=/dev/md5 of=/dev/null bs=1M count=1000

On bare metal this gave 272 MB/s; the same command within VMware gave only 47.9 MB/s.
I know that dd is not a benchmark, but for measuring sequential reads within a system it's fair enough for me.
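One caveat when re-running a read test like that: the page cache can inflate the numbers. A sketch of how I'd keep it honest (echoing to drop_caches works on 2.6.16+ kernels; dd's iflag=direct needs a reasonably recent coreutils):

# sync; echo 3 > /proc/sys/vm/drop_caches
# dd if=/dev/md5 of=/dev/null bs=1M count=1000 iflag=direct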
Henry
henry ritzlmayr wrote:
On Wed, 2008-06-11 at 10:06 -0700, MHR wrote:
On Wed, Jun 11, 2008 at 8:36 AM, Ruslan Sivak russ@vshift.com wrote:
I guess it has something to do with the ballooning driver for Dom0. It looks like I just tried to allocate too much memory to the DomU and the box went down hard. I think there's a setting in Xen for the minimum amount of memory Dom0 can go down to, but I'm not sure why Dom0 is using 600MB of RAM. Is there a mini installation of CentOS that I can do that would use less RAM? I've already unchecked all the boxes when installing CentOS. I would like Dom0 to be as small as possible, both due to RAM usage and from a security perspective.
I've not familiarized myself with Xen yet, but have you considered VMware Server? I haven't had any serious problems with it, and none at all since v1.0.5 came out (1.0.6 is the current one). It works nicely, stays within its memory allocation, and top et al. work as you'd expect them to.
HTH
mhr
I evaluated VMware Server myself (v1.0.3), and at that time disk I/O was pretty bad within a virtual machine. The only solution I found was Xen with paravirtualization. Has there been any progress on that in later releases?
For example:
dd if=/dev/md5 of=/dev/null bs=1M count=1000

On bare metal this gave 272 MB/s; the same command within VMware gave only 47.9 MB/s.
I know that dd is not a benchmark, but for measuring sequential reads within a system it's fair enough for me.
Henry
This was another reason that I went with Xen, although my testing method at the time was flawed. I had been using HD Tune to measure Windows performance, but it was giving me 70 MB/s across the board. I'm using Iometer now, and I will try reinstalling VMware to see what the performance difference is.
Russ
On Wed, Jun 11, 2008 at 12:31 PM, henry ritzlmayr centos@rc0.at wrote:
On Wed, 2008-06-11 at 10:06 -0700, MHR wrote:
On Wed, Jun 11, 2008 at 8:36 AM, Ruslan Sivak russ@vshift.com wrote:
I guess it has something to do with the ballooning driver for Dom0. It looks like I just tried to allocate too much memory to the DomU and the box went down hard. I think there's a setting in Xen for the minimum amount of memory Dom0 can go down to, but I'm not sure why Dom0 is using 600MB of RAM. Is there a mini installation of CentOS that I can do that would use less RAM? I've already unchecked all the boxes when installing CentOS. I would like Dom0 to be as small as possible, both due to RAM usage and from a security perspective.
I've not familiarized myself with Xen yet, but have you considered VMware Server? I haven't had any serious problems with it, and none at all since v1.0.5 came out (1.0.6 is the current one). It works nicely, stays within its memory allocation, and top et al. work as you'd expect them to.
HTH
mhr
I evaluated VMware Server myself (v1.0.3), and at that time disk I/O was pretty bad within a virtual machine. The only solution I found was Xen with paravirtualization. Has there been any progress on that in later releases?
For example:
dd if=/dev/md5 of=/dev/null bs=1M count=1000

On bare metal this gave 272 MB/s; the same command within VMware gave only 47.9 MB/s.
I know that dd is not a benchmark, but for measuring sequential reads within a system it's fair enough for me.
Henry
Has anyone tested OpenVZ? I see many hosting providers using OpenVZ with CentOS, but I haven't had the time to try it out yet. I know it's not paravirtualization, but maybe someone has been able to use it successfully? I have the same issue with RHEL 5.2, which shows just 4GB of my 6GB box.
Cheers,
On Wed, Jun 11, 2008 at 10:31 AM, henry ritzlmayr centos@rc0.at wrote:
I evaluated VMware Server myself (v1.0.3), and at that time disk I/O was pretty bad within a virtual machine. The only solution I found was Xen with paravirtualization. Has there been any progress on that in later releases?
I couldn't say - I mainly use VMware so I can run the two or three Window$ applications I can't get (or can't find for a good price) on Linux. Performance is not really an issue, and most of the disk access I do is via Samba to my host disks. I avoid using the virtual disks as much as possible.
HTH.
mhr
MHR wrote:
On Wed, Jun 11, 2008 at 10:31 AM, henry ritzlmayr centos@rc0.at wrote:
I evaluated VMware Server myself (v1.0.3), and at that time disk I/O was pretty bad within a virtual machine. The only solution I found was Xen with paravirtualization. Has there been any progress on that in later releases?
I couldn't say - I mainly use VMware so I can run the two or three Window$ applications I can't get (or can't find for a good price) on Linux. Performance is not really an issue, and most of the disk access I do is via Samba to my host disks. I avoid using the virtual disks as much as possible.
I came across this a couple days ago
http://blogs.vmware.com/performance/
May 22, 2008 100,000 I/O Operations Per Second, One ESX Host
Maxing out 500 15k RPM spindles with a single host - I didn't think that was even possible. Granted, this is ESX and not VMware Server (previously known as GSX), but ESX is pretty cheap these days; the Foundation version gets you a ton of stuff, minus hot migrations, for $999 (per 2 procs), where it used to be about $3750. I think the Enterprise edition (~$5k per 2 procs) is overkill for most uses.
nate
nate wrote:
MHR wrote:
On Wed, Jun 11, 2008 at 10:31 AM, henry ritzlmayr centos@rc0.at wrote:
I evaluated VMware Server myself (v1.0.3), and at that time disk I/O was pretty bad within a virtual machine. The only solution I found was Xen with paravirtualization. Has there been any progress on that in later releases?
I couldn't say - I mainly use VMware so I can run the two or three Window$ applications I can't get (or can't find for a good price) on Linux. Performance is not really an issue, and most of the disk access I do is via Samba to my host disks. I avoid using the virtual disks as much as possible.
I came across this a couple days ago
http://blogs.vmware.com/performance/
May 22, 2008 100,000 I/O Operations Per Second, One ESX Host
Maxing out 500 15k RPM spindles with a single host - I didn't think that was even possible. Granted, this is ESX and not VMware Server (previously known as GSX), but ESX is pretty cheap these days; the Foundation version gets you a ton of stuff, minus hot migrations, for $999 (per 2 procs), where it used to be about $3750. I think the Enterprise edition (~$5k per 2 procs) is overkill for most uses.
nate
If I were going to pay for something, I would probably get XenServer. I really liked the management utility, and I think the performance was OK; I might need to test it again. It's also only $1k or so for the Standard edition (and there's a free Express edition, which only supports 4GB of RAM).
Russ
On Wed, Jun 11, 2008 at 11:20 AM, nate centos@linuxpowered.net wrote:
I came across this a couple days ago
http://blogs.vmware.com/performance/
May 22, 2008 100,000 I/O Operations Per Second, One ESX Host
Maxing out 500 15k RPM spindles with a single host - I didn't think that was even possible. Granted, this is ESX and not VMware Server (previously known as GSX), but ESX is pretty cheap these days; the Foundation version gets you a ton of stuff, minus hot migrations, for $999 (per 2 procs), where it used to be about $3750. I think the Enterprise edition (~$5k per 2 procs) is overkill for most uses.
On the day that I can look at a $1000 piece of software and think of it as pretty cheap, I will give that $1000 (or more) instead to the CentOS project.
mhr
On Wed, 2008-06-11 at 11:36 -0400, Ruslan Sivak wrote:
Tim Verhoeven wrote:
On Wed, Jun 11, 2008 at 4:46 PM, Ruslan Sivak russ@vshift.com wrote:
While it seems to make sense (and both xentop and virsh nodeinfo show the right amount of memory), even when I shut down one of the VMs, free and top still think I only have 6GB of RAM.
That is normal; the memory that was used by VMs is not automatically returned to the dom0, and therefore won't show up when running free and top.
Regards, Tim
I guess it has something to do with the ballooning driver for Dom0. It looks like I just tried to allocate too much memory to the DomU and the box went down hard. I think there's a setting in Xen for the minimum amount of memory Dom0 can go down to, but I'm not sure why Dom0 is using 600MB of RAM. Is there a mini installation of CentOS that I can do that would use less RAM? I've already unchecked all the boxes when installing CentOS. I would like Dom0 to be as small as possible, both due to RAM usage and from a security perspective.
Russ
The option you're thinking of is called dom0-min-mem and can be found in /etc/xen/xend-config.sxp. Regarding a mini installation of CentOS - not that I know of, but you must have some extra daemons running, since on my installations here DOM0 only consumes 373MB, and I have postfix running on DOM0 as well.
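The entry looks something like this (a sketch; 256 is just an example value, in MB):

(dom0-min-mem 256)

Alternatively, you can pin DOM0's size at boot by appending dom0_mem to the hypervisor line in grub.conf (again an example value; the exact xen.gz file name depends on the installed kernel):

kernel /xen.gz-2.6.18-53.1.21.el5 dom0_mem=512M

That sidesteps ballooning entirely, at the cost of fixing DOM0's footprint up front.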
Henry
On Tue, Jun 10, 2008 at 5:43 PM, Ruslan Sivak russ@vshift.com wrote:
I'm running CentOS 5.1 with all updates, and the xen kernel. For some reason the OS is not seeing the full amount of RAM.

# uname -a
Linux CentOS-VM-A 2.6.18-53.1.21.el5xen #1 SMP Tue May 20 10:03:27 EDT 2008 x86_64 x86_64 x86_64 GNU/Linux
# free
             total       used       free     shared    buffers     cached
Mem:       6104064    3445136    2658928          0    1412236    1515032
-/+ buffers/cache:     517868    5586196
Swap:      2031608          0    2031608
On another box with identical hardware, I see:

# free
             total       used       free     shared    buffers     cached
Mem:       5038080    1818980    3219100          0     145208    1148608
-/+ buffers/cache:     525164    4512916
Swap:      2031608        152    2031456
Both of these boxes have 8GB of RAM. Is there a reason I'm not seeing all of it?

Russ
I wonder if this is related to a known issue with the xen kernel.
http://www.centos.org/modules/newbb/viewtopic.php?topic_id=12491&forum=3...
Or
http://bugs.centos.org/view.php?id=1905
You might want to try the patched grub offered in there.
Akemi
Akemi Yagi wrote:
I wonder if this is related to a known issue with the xen kernel.
http://www.centos.org/modules/newbb/viewtopic.php?topic_id=12491&forum=3...
Or
http://bugs.centos.org/view.php?id=1905
You might want to try the patched grub offered in there.
Akemi
Thank you, I will try that tomorrow. Does anyone know if this is fixed in 5.2?
Russ
On Tue, Jun 10, 2008 at 7:50 PM, Ruslan Sivak russ@vshift.com wrote:
Akemi Yagi wrote:
I wonder if this is related to a known issue with the xen kernel.
http://www.centos.org/modules/newbb/viewtopic.php?topic_id=12491&forum=3...
Or
http://bugs.centos.org/view.php?id=1905
You might want to try the patched grub offered in there.
Akemi
Thank you, I will try that tomorrow. Does anyone know if this is fixed in 5.2? Russ
This bug is being tracked at:
https://bugzilla.redhat.com/show_bug.cgi?id=445893
It is apparently not fixed in 5.2. If you are affected by this bug, using the grub offered above is the best solution at the moment. If your issue is something else, you might still want to give a newer (test) kernel a try. Someone got a slightly better result that way (see comment #3 in the bugzilla).
Akemi