On 2016-03-15 18:32, Richard W.M. Jones wrote:
On Mon, Mar 14, 2016 at 03:00:35PM +0000, Gordan Bobic wrote:
On 2016-03-01 22:32, Michael Howard wrote:
On 01/03/2016 22:26, Richard W.M. Jones wrote:
On Mon, Feb 29, 2016 at 03:20:03PM +0000, Michael Howard wrote:
Just to let you know, I can't get this to work. aarch64 is supposed to be binary compatible with 32-bit ARM when the correct libraries are installed, but I'm starting to think this CPU isn't.
All I get is 'cannot execute binary file: Exec format error', regardless of what I try.
As I understand it, the problem is page size: 64K was chosen by Red Hat for aarch64, whereas 4K is the norm on armv7.
Anyway, you can run a 32 bit VM and it works well -- in fact a lot faster than regular 32 bit armv7 hardware.
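For anyone going the VM route, a minimal sketch of booting a 32-bit guest under KVM on the aarch64 host; the kernel and rootfs paths are placeholders and the exact QEMU options may vary by version:

    # run the vCPU in AArch32 state ("aarch64=off") under KVM
    qemu-system-aarch64 -enable-kvm -M virt -cpu host,aarch64=off \
        -m 1024 -nographic \
        -kernel /path/to/armv7-zImage \
        -append "console=ttyAMA0 root=/dev/vda" \
        -drive file=/path/to/armv7-rootfs.img,if=virtio,format=raw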
Yes, with CONFIG_ARM64_4K_PAGES=y and CONFIG_COMPAT=y, 32 bit binaries run fine.
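For reference, the relevant .config fragment on a 4.x arm64 kernel looks roughly like this (exact option names as in the upstream Kconfig):

    # arm64: 4K pages plus the 32-bit compat syscall layer
    CONFIG_ARM64_4K_PAGES=y
    # CONFIG_ARM64_64K_PAGES is not set
    CONFIG_COMPAT=y

On the running kernel, getconf PAGESIZE should then report 4096 rather than 65536.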
I built a kernel with these options enabled, but chrooting into an armv5tel subtree segfaults immediately. :-(
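For context, the kind of test that fails here is essentially a plain chroot into the 32-bit tree; /srv/armv5tel and the busybox binary are hypothetical stand-ins for the actual paths:

    # check the binary really is 32-bit ARM, then try to execute it in place
    file /srv/armv5tel/bin/busybox
    chroot /srv/armv5tel /bin/sh -c 'uname -m'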
I may be missing some context here, but is there some reason not to just use a VM?
Performance for one.
It's more predictable because you'll be running the same kernel that the 32 bit environment is expecting.
The aarch64 environment seems no less predictable with a kernel defaulting to 4K pages. There are also other reasons; for example, I want to run ZFS on the host and don't want to have to re-export the shares to guests via a network file system.
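For what it's worth, host ZFS datasets can simply be bind-mounted into the 32-bit tree rather than re-exported over a network file system; a sketch, where tank/build and the chroot path are hypothetical:

    # expose a host ZFS dataset directly to the 32-bit environment
    zfs create tank/build
    mkdir -p /srv/armv5tel/mnt/build
    mount --bind /tank/build /srv/armv5tel/mnt/build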
Also it will (or should) work without you needing to compile your own host kernel.
Well, as far as the host kernel is concerned:
1) I already posted a link to a selection of posts from Linus himself explaining at some length why large default pages are a bad idea at the best of times (and for all the other cases where bigger pages do give us that 3% speed-up, we can use hugepages directly instead; see the sketch further down).
2) The kernel that ships (4.2) is deprecated and was never a long-term (LTS) kernel.
So switching to the next LTS kernel, configured better for practically any task, seems advantageous in every way.
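On the hugepage point: explicit hugepages are still available on a 4K-page kernel for the workloads that actually benefit. A minimal sketch using hugetlbfs (the reservation size here is arbitrary):

    # reserve some explicit hugepages and mount hugetlbfs for applications that ask for them
    echo 128 > /proc/sys/vm/nr_hugepages
    mkdir -p /mnt/hugepages
    mount -t hugetlbfs none /mnt/hugepages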
I now have docker happily running armv5tel guests.
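For completeness, this is roughly what that looks like; armv5tel-rootfs is a hypothetical image imported from the same 32-bit tree:

    # import the existing armv5tel tree as an image and run a container from it
    tar -C /srv/armv5tel -c . | docker import - armv5tel-rootfs
    docker run --rm -it armv5tel-rootfs /bin/sh -c 'uname -m'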
Gordan