On Feb 3, 2021, at 5:28 PM, Lists <lists at benjamindsmith.com> wrote:

> I had the impression that MacOS' Rosetta II might do what I need

That’s rather difficult when the x86 code in question is on the other side of a virtualized CPU. It’s a double translation, you see: real x86 code running on a virtual x86 CPU under your CPU’s virtualization extensions (e.g. Intel VT-x), all under an Apple M1 ARM64 variant. That’s not an impossible dance to pull off, but you’d need three parties coordinating the dance steps if you want a high-fidelity CentOS-on-bare-metal emulation: Intel, Apple, and your VM technology provider of choice.

If you’re willing to drop one of those three parties out of the equation, you have alternatives:

1. Full CPU emulation, as with QEMU. This should be able to run x86_64 CentOS on an M1, but it’ll be like the bad old days of software virtualization, back around 2000, when every instruction inside the VM had to be translated into native instructions.

2. Cross-compilation to x86 code under macOS, which allows Rosetta II to take effect, but now you aren’t running under CentOS proper any more. Even if you port over the whole userland you depend on, you’ve still got the macOS kernel under your app, which may differ in significant areas that matter.

> I need to have access to a VM that's binary-compatible
> with production so that I can make sure it "really really works" before
> pushing stuff out.

If “really really works” is defined in terms of automated testing — and if not, why not? — then it sounds like you want a CI system, though probably not a CI/CD system, if I read your intent properly. That is, you build and test on macOS with ARM code, commit your changes to whatever release repository you maintain now, the CI system picks that up, tries to build it, runs the tests, and notifies you if anything fails. The resulting binary packages can then be manually pushed to deployment. (It’s that last difference that makes this something other than CI/CD.)
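For concreteness, option 1 might look something like the sketch below. The image names, disk size, and resource counts are placeholders of my own, not anything from this thread; adjust to taste.

```shell
# Full x86_64 system emulation on an Apple Silicon host (option 1 above).
# Filenames and sizes are illustrative placeholders.
qemu-img create -f qcow2 centos.qcow2 20G

qemu-system-x86_64 \
    -m 4G -smp 4 \
    -drive file=centos.qcow2,format=qcow2 \
    -cdrom CentOS-7-x86_64-Minimal.iso \
    -boot d \
    -nographic
```

Because every guest instruction goes through QEMU’s software translator rather than the host CPU’s virtualization extensions, expect a substantial slowdown compared to native virtualization.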
Making your code work across CPU types is more work, but it can point out hidden assumptions that are better off excised. For instance, this line of C code has a data race in a multithreaded application:

    ++i;

…even though it compiles to a single Intel CPU instruction! Whether it bites you on Intel gets you way down into niggly implementation details, but it’s all but guaranteed to bite you on ARM due to its RISC nature: there, the increment is an explicit load-modify-store sequence requiring 3 or 4 CPU instructions, and that few only if you don’t add the atomic operations needed to fix the problem.