Warren:

Thanks for the good info and link.

On Wed, Nov 18, 2015 at 4:41 PM, Warren Young <wyml at etr-usa.com> wrote:
> On Nov 18, 2015, at 1:20 PM, Kwan Lowe <kwan.lowe at gmail.com> wrote:
>>
>> Because of caching, from VMware's perspective, all Linux memory is
>> being "used".
>
> Nope. VMware's memory ballooning feature purposely keeps some of the guest's RAM locked away from the kernel. This is where RAM comes from when another guest needs more physical RAM than it currently has access to:
>
> https://blogs.vmware.com/virtualreality/2008/10/memory-overcomm.html

Hmm.. I may be misunderstanding how the balloon driver works. I'm looking at section 3.3 in this guide:

https://www.vmware.com/files/pdf/perf-vsphere-memory_management.pdf

When a guest starts up, its cached memory is very low. This is reflected in the VMware hypervisor view, which shows only a small percentage of host memory in use. After disk activity, the host memory allocation grows until the hypervisor view shows all of the guest's configured memory allocated. The guest's 'free' still shows the majority of memory as available (though "cached"), and "vmware-toolbox-cmd stat balloon" reports 0 MB ballooned on these instances.

From the PDF above, it seems that ballooning only kicks in when there is memory pressure at the hypervisor level. Unfortunately, I don't have a way to safely test this.

> There are downsides.
>
> One is that pages locked up by the balloon driver aren't being used by Linux's buffer cache. But on the other hand, the hypervisor itself fulfills some of that role, which is why rebooting a VM guest is typically much faster than rebooting the same OS on the same bare hardware.

This is interesting. We may be double-caching, then, if the VMware host is also doing some caching.

> Another, of course, is that oversubscription risks running out of RAM if all of the guests decide to try to use all the RAM the host told them they have. All of the guests end up being forced to deflate their balloons until there is no more balloon memory left.
>
>> The increase in VM density is an acceptable tradeoff.
>
> Instead of oversubscribing the real RAM of the system, consider starting and stopping VMs as needed, so that only a subset of them are running at any given time. That lets you host more VMs underneath a given hypervisor than could run simultaneously, as long as you don't need too many of the VMs at once.
>
> This pattern works well for a suite of test VMs, since you probably don't need to test all configurations in parallel. You might need only one or two of the guests at any given time.

This is a possibility. It will be a hard sell, but it may work for some.

>> 1) What options are available in CentOS to limit the page cache?
>
> Again, you should not be tuning Linux's virtual memory manager to make the VM host happy. That's one of the jobs VMware Tools performs.

Agreed.. I don't want to do too much on the guest side, but we're getting heat to increase density. This is driven by some app owners who throw memory at systems as a first step in troubleshooting. :D

Thanks again for your feedback..

Kwan
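
For reference, the checks discussed above amount to roughly the following on a CentOS guest (a minimal sketch, assuming VMware Tools is installed; exact output varies by version):

    # Guest view: most RAM typically shows up as page cache, not "used" by processes
    free -m
    grep -E 'MemTotal|MemFree|Cached' /proc/meminfo

    # Balloon driver view: how much guest RAM the hypervisor has reclaimed via ballooning
    vmware-toolbox-cmd stat balloon

When ballooning is idle, the last command reports 0 MB even while the hypervisor view shows all of the guest's configured memory allocated, which matches the behavior described in the thread.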