On Tue, Mar 15, 2016 at 07:49:22PM +0000, Gordan Bobic wrote:
> On 2016-03-15 18:32, Richard W.M. Jones wrote:
>> I may be missing some context here, but is there some reason not to just use a VM?
> Performance for one.
Can you precisely quantify that?
>> It's more predictable because you'll be running the same kernel that the 32-bit environment is expecting.
> The aarch64 environment seems no less predictable with a kernel defaulting to 4K pages.
It's more predictable for the 32-bit guest, because the guest will have exactly the kernel it expects, not some random kernel and a chroot.
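(To make the page size point concrete, here is a minimal sketch that simply asks the running kernel what page size userspace sees. A 32-bit binary in a chroot or container inherits the host kernel's page size, so on a 64K-page aarch64 kernel it reports 65536, and on a 4K-page kernel 4096.)

/* Minimal sketch: print the page size the running kernel exposes to userspace. */
#include <stdio.h>
#include <unistd.h>

int main(void)
{
    long page = sysconf(_SC_PAGESIZE);
    if (page < 0) {
        perror("sysconf");
        return 1;
    }
    printf("page size: %ld bytes\n", page);
    return 0;
}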
> There are also other reasons; for example, I want to run ZFS on the host and don't want to have to re-export the shares to guests via a network file system.
>> Also it will (or should) work without you needing to compile your own host kernel.
> Well, as far as the host kernel is concerned:
>
> - I already posted a link to a selection of posts from Linus himself explaining at some length why using large default pages is a bad idea at the best of times (and for all the other cases where bigger pages do give us that 3% speed-up, we can use hugepages directly instead; a sketch follows below).
>
> - The kernel that ships (4.2) is deprecated and was never an LT kernel.
>
> So switching to the next LT kernel, configured better for practically any task, seems advantageous in every way.
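(On the "use hugepages directly" remark above: a minimal sketch, assuming a kernel built with hugetlbfs support and a few pages reserved via /proc/sys/vm/nr_hugepages; the mapping simply fails if none are available.)

/* Minimal sketch: map one region explicitly backed by hugepages rather than
 * relying on a large default page size.  2 MB is the default hugepage size
 * on x86-64 and on arm64 kernels using a 4K granule. */
#define _GNU_SOURCE
#include <stdio.h>
#include <string.h>
#include <sys/mman.h>

#define LENGTH (2 * 1024 * 1024)

int main(void)
{
    void *p = mmap(NULL, LENGTH, PROT_READ | PROT_WRITE,
                   MAP_PRIVATE | MAP_ANONYMOUS | MAP_HUGETLB, -1, 0);
    if (p == MAP_FAILED) {
        perror("mmap(MAP_HUGETLB)");  /* typically ENOMEM if no hugepages are reserved */
        return 1;
    }
    memset(p, 0, LENGTH);             /* touch the mapping so it is actually faulted in */
    printf("mapped %d bytes backed by hugepages at %p\n", LENGTH, p);
    munmap(p, LENGTH);
    return 0;
}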
I think RHELSA 7.3 will have a 4.5 kernel. Of course "LT" kernels aren't really relevant for Red Hat, because we spend huge amounts of money supporting our kernels long term.
I now have docker happily running armv5tel guests.
Rich.