Hello, I'm a developer on Fedora/RHEL and OpenShift. Lately we've been landing a lot of "bootable container" changes in OpenShift core, and there's a lot more to come.
However, as we've been going about this...I've been saying to people that I wish I had a time machine to go back and do bootable containers from the start. There are a lot of things we're doing today that I think we should stop doing, e.g.:
- Switching to kernel-rt by fiddling with each node; we should be simply pulling a pre-built bootable container image with that kernel (more on this below)
- Getting away from injecting so much persistent state by default (both via Ignition and outside of it)
And crucially, I think we should be developing tools and techniques that apply *outside* of Kubernetes/OpenShift and also work well with it. To be direct, I'd like to eventually productize some of what's happening here in RHEL, not in OpenShift.
As part of this (potential) re-architecture of how we think of systems management, I created the https://github.com/containers/bootc project. To be direct: If successful, I think bootc will be the successor to (rpm-)ostree. It's also intended to much more closely align with the github.com/containers organization.
A simple way to think of this is: One can (build and) run *application* containers with podman; and these containers can also be run in e.g. Kubernetes/OpenShift. One can build *bootable* containers using any tooling (including podman build), but *running* them is via bootc on the end machine. bootc understands kernels etc.
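To make that split concrete, here's a minimal sketch of the flow I have in mind - the base image name is the one proposed later in this mail (it doesn't exist yet), the added package is arbitrary, and the exact bootc CLI may still shift as the project iterates:

    # Build a derived bootable image with plain container tooling
    cat > Containerfile <<'EOF'
    # Proposed (not yet existing) base image name from this proposal
    FROM quay.io/centos/centos-boot:stream9
    # Layer an extra package; rpm-ostree acts as the package tool here
    RUN rpm-ostree install tmux
    EOF
    podman build -t registry.example.com/my-custom-os:latest .
    podman push registry.example.com/my-custom-os:latest

    # On the target machine, bootc (not podman) is what runs it:
    bootc switch registry.example.com/my-custom-os:latest
    # ...and later, to follow rebuilds of that same tag:
    bootc upgrade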
But there's a lot to figure out here - and I want to have a space to figure out this stuff and experiment with it outside of a direct-to-product path. I think a CentOS SIG makes sense for this.
So what I'd like to do is either:
- Add a new effort to the Cloud SIG, which currently (IMO a bit confusingly) hosts OpenStack/RDO and OpenShift/OKD things; this would be a 3rd thing. The bootc work would then be the "base OS" split for OKD/SCOS. But of course, nothing stops one from building bootable host images that are instead designed to be RDO/OpenStack hosts.
- Or, create a new SIG
Personally, I lean towards the latter because honestly I find the naming "Cloud" to be misleading - bootc is also intended to be useful for standalone, non-cloud-infrastructure settings (such as desktops and IoT).
Specifically, I'd like to transfer the existing code that lives in https://github.com/cgwalters/bootc-demo-base-images (specifically https://github.com/cgwalters/bootc-demo-base-images/blob/main/c9s.yaml ) into something CentOS-affiliated and explicitly maintained by a team. (Though I'm not super excited to move it to pagure like at least some other SIG content, but let's not get distracted by git hosting too much here).
Another way to say it is that I'd love to ship quay.io/centos/centos-boot:stream9 (notice the -boot). Or failing that, it'd be quay.io/centos-boot/centos-boot:stream9 or so. There's a *lot* to discuss in terms of what actually goes in these base images, and also ensuring it's equally ergonomic for users to build their own base images. So really it's very likely there wouldn't be just *one* base image. In fact, I recently introduced a -rt variant with the RT kernel: https://github.com/cgwalters/bootc-demo-base-images/commit/68afb072a5a1396c7... - and this was specifically motivated by issues we hit in OCP. But again, I want to have a space where we try a "clean(er) slate" approach for a while, marked "not for production use". Everything done here *is* made with eventual production use as an explicit goal, though (e.g. it's a toplevel design goal that existing ostree-based systems can be seamlessly switched to be container-based without reprovisioning).
At the same time, bootc already introduces some quite new things that need design iteration; for example: https://github.com/containers/bootc#using-bootc-install - we ship tooling such that a container can install itself (without going through a raw disk image, as is used by both OCP and Edge deployments today). I'd also like to aim to land the Anaconda changes to install these bootable containers: https://github.com/rhinstaller/anaconda/pull/4561
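For the curious, the "container installs itself" flow looks roughly like the following; this is only a sketch of what the README above describes, the flags and target device are illustrative, and the exact invocation is still evolving:

    # Run the bootable image itself, privileged, and have it install
    # its own contents to a target block device (example device only)
    podman run --rm --privileged --pid=host \
        quay.io/centos/centos-boot:stream9 \
        bootc install /dev/vda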
OK this is already too long, so I'm just going to click send =) Thoughts?
Hi,
On Tue, Mar 7, 2023 at 6:06 PM Colin Walters walters@verbum.org wrote:
[...]
> Personally, I lean towards the latter because honestly I find the naming "Cloud" to be misleading - bootc is also intended to be useful for standalone, non-cloud-infrastructure settings (such as desktops and IoT).
Speaking as part of the Cloud SIG, I think that's a good reason to look for a more appropriate (or maybe a new) SIG for this effort. Others on the ML may have ideas about other SIGs that may be a better place.
Thanks for this detailed explanation about bootc use cases (I'm eager to test it).
IIUC, the expected deliverables that you have in mind for the SIG are bootc (and maybe other) RPM packages, one or more bootc container images in some official CentOS namespace on Quay, and, at some point, maybe an alternative Anaconda installer or other tools with support for bootc images?
Best regards,
Alfredo
I'm wondering if the Alternative Images SIG[1] is the SIG you need. We were thinking of doing containers, but it was farther down on the list of things to get working. The containers we were thinking of were "desktop" containers, where the application would be something graphical, or even possibly a whole desktop. But I could see being able to create bootc containers with the same infrastructure, once it gets set up.
One thing our SIG does not do is create/build packages. I'm not seeing you say you need new/rebuilt packages. Do you need new/altered packages? Or does everything get done via the setup scripts and configs?
Troy
On Wed, Mar 8, 2023, at 11:39 AM, Troy Dawson wrote:
> I'm wondering if the Alternative Images SIG[1] is the SIG you need. We were thinking of doing containers, but it was farther down on the list of things to get working. The containers we were thinking of were "desktop" containers, where the application would be something graphical, or even possibly a whole desktop. But I could see being able to create bootc containers with the same infrastructure, once it gets set up.
At the current time, the container images being built here are using `rpm-ostree compose image`, which is mainly oriented toward bootable containers. Though nothing stops one from using it for non-bootable containers such as these, too.
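For anyone who hasn't used it, the invocation is roughly like this (the manifest and output names are just examples; check the rpm-ostree docs for the exact flags):

    # Compose a container image directly from the treefile-style manifest
    rpm-ostree compose image --initialize c9s.yaml centos-boot-c9s.ociarchive
    # The resulting OCI archive can then be pushed with skopeo/podman like any other image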
> One thing our SIG does not do is create/build packages. I'm not seeing you say you need new/rebuilt packages. Do you need new/altered packages?
The https://github.com/containers/bootc project is a net-new package right now. I'd like to manage it in a similar fashion to podman. Today, there is a COPR at https://copr.fedorainfracloud.org/coprs/rhcontainerbot/bootc/ which is under the same "rhcontainerbot" namespace as is used by the podman developers.
podman is shipped in RHEL though, whereas bootc is not (yet).
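(For anyone who wants to kick the tires before any of this lands: something along these lines should work on a machine with the dnf copr plugin installed - whether a C9S chroot is enabled in that COPR is an assumption on my part, so check there first.)

    # Enable the COPR mentioned above and install the package
    dnf copr enable rhcontainerbot/bootc
    dnf install bootc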
I think, though, that the SIG would probably build the RPMs in the same way as other SIGs such as Hyperscale.
> Or does everything get done via the setup scripts and configs?
The fewer bash scripts the better...almost all of the net-new code here is Rust.
On Wed, Mar 8, 2023, at 11:20 AM, Alfredo Moralejo Alonso wrote:
> Speaking as part of the Cloud SIG, I think that's a good reason to look for a more appropriate (or maybe a new) SIG for this effort. Others on the ML may have ideas about other SIGs that may be a better place.
Thanks, useful to hear agreement there. That said, I am sure this SIG and Cloud would be coordinating, because deploying managed infrastructure systems (Open* and KubeVirt, e.g.) and also standalone cloud nodes are quite important use cases for bootc. They're just not the only ones.
To cross-reference: for example, we're having an interesting debate about the intersection of bootc and KubeVirt over here: https://groups.google.com/g/kubevirt-dev/c/K-jNJL_Y9bA/m/ZTH78OqFBAAJ
> IIUC, the expected deliverables that you have in mind for the SIG are bootc (and maybe other) RPM packages, one or more bootc container images in some official CentOS namespace on Quay, and, at some point, maybe an alternative Anaconda installer or other tools with support for bootc images?
Yep, exactly! But beyond that:
- Documentation and best practices; e.g. today we have https://github.com/coreos/layering-examples but almost none of that is at all specific to "CoreOS". I should be able to run Ansible in a Dockerfile to configure my desktop system or my IoT system that derives from CentOS 9, too.
- CI integration is also quite important to me; I'd love to integrate with MRs to existing packages, for example. An excellent example of this is https://src.fedoraproject.org/rpms/openssh/pull-request/39 where the openssh maintainer made a change which is just fundamentally incompatible with the way we do image based updates, and I'd like to make sure that "can ssh to the machine after it upgrades in image mode" is a basic test run on important packages in C9S (see the sketch below).
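To sketch what I mean by that last bullet: the test itself could be as small as something like this (an entirely hypothetical job, all names invented for illustration):

    # Build a derived image containing the package change from the MR
    podman build -t registry.example.com/c9s-boot-test:openssh-mr .
    podman push registry.example.com/c9s-boot-test:openssh-mr

    # Point a running test VM at it and reboot into the new image
    ssh testvm 'sudo bootc switch registry.example.com/c9s-boot-test:openssh-mr && sudo reboot'

    # The actual assertion: can we still ssh in after the image-mode update?
    sleep 120
    ssh -o ConnectTimeout=10 testvm true && echo "ssh works after image-mode update"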
Upfront apologies for being forced to use Outlook to respond. I tried to make it right. Comments inline (If I did it right).
On 3/7/23, 9:01 AM, "CentOS-devel on behalf of Colin Walters" <centos-devel-bounces@centos.org on behalf of walters@verbum.org> wrote:
[. . .]
> But there's a lot to figure out here - and I want to have a space to figure out this stuff and experiment with it outside of a direct-to-product path. I think a CentOS SIG makes sense for this.
I have a lot to learn here, so my curiosity is piqued.
> So what I'd like to do is either:
> - Add a new effort to the Cloud SIG, which currently (IMO a bit confusingly) hosts OpenStack/RDO and OpenShift/OKD things; this would be a 3rd thing. The bootc work would then be the "base OS" split for OKD/SCOS. But of course, nothing stops one from building bootable host images that are instead designed to be RDO/OpenStack hosts.
> - Or, create a new SIG
I don't see a reason to support another SIG for this image model. You are talking about supporting a different kernel though, and that might be (though I suspect it is not) a greater burden on image builds. Personally, I have high confidence that if we run into pipeline issues, you are going to be able to provide a lot of support there.
> Personally, I lean towards the latter because honestly I find the naming "Cloud" to be misleading - bootc is also intended to be useful for standalone, non-cloud-infrastructure settings (such as desktops and IoT).
Cloud, to me, is focused around that managed experience that I think you are describing, be it isolated to a desktop or a Raspberry Pi. Sure, there are much deeper layers of abstraction to use, but I think those are great environments for experimentation that later ends up feeding a larger cloud experience. I think this is a good fit.
> Specifically, I'd like to transfer the existing code that lives in https://github.com/cgwalters/bootc-demo-base-images (specifically https://github.com/cgwalters/bootc-demo-base-images/blob/main/c9s.yaml ) into something CentOS-affiliated and explicitly maintained by a team. (Though I'm not super excited to move it to pagure like at least some other SIG content, but let's not get distracted by git hosting too much here).
It seems to me that you don't think this is going to stay in "demo" for much longer, and that you have high confidence this will ultimately be a better fit for the community. Recent discussions have also pointed me in that direction.
I have heard other advanced developers discuss getting to this lower level in my own world and I would love to see if you can make it work faster with the support of the team. Selfishly, I'd also like to have your help on what you might consider misleading about the word "cloud".
On Thu, Mar 9, 2023, at 10:46 AM, Duncan, David via CentOS-devel wrote:
> I have a lot to learn here, so my curiosity is piqued.
I think a "magic moment" is actually trying out the "manage my OS via container build git-ops" flow. It's not just the mechanics. It's being able to apply every tool and technique one knows about application container images to the problem domain. The "git push" -> automatic container build -> push to registry happening on the server, then on the client you can just `bootc upgrade` and get that change.
> I don't see a reason to support another SIG for this image model. You are talking about supporting a different kernel though, and that might be (though I suspect it is not) a greater burden on image builds.
Let's be very clear: while I mentioned kernel-rt, there is no separate kernel involved here, and all this technology works with the kernel options enabled today in the C9S kernel. (I don't want to digress, but I'm excited about the composefs/overlayfs-verity work; it's not required though.)
> Cloud, to me, is focused around that managed experience that I think you are describing, be it isolated to a desktop or a Raspberry Pi. Sure, there are much deeper layers of abstraction to use, but I think those are great environments for experimentation that later ends up feeding a larger cloud experience. I think this is a good fit.
Hmmm. If cloud extends to (on premise, physical, not actually cloud hosted) desktops and raspberry-pi, then where does cloud stop?
> It seems to me that you don't think this is going to stay in "demo" for much longer, and that you have high confidence this will ultimately be a better fit for the community. Recent discussions have also pointed me in that direction.
I hope so, yes! A key point in the creation of bootc is that the "bootc upgrade" part is actually just a fresh coat of paint on top of underlying infrastructure that has been there a long time (ostree is about 10 years old), and it also crucially builds on top of the github.com/containers/image library, which is also battle tested. The "bridge" code between ostree and the containers stack is over a year old now; young, and we've hit bugs (e.g. https://github.com/ostreedev/ostree-rs-ext/issues/405 is a great example), but it definitely does work, and in fact we are shipping it in OpenShift 4.12 by default for OS updates today.
OTOH, `bootc install` is much newer and much less battle tested (but to write it I reused a lot of code we had around coreos land).
> I have heard other advanced developers discuss getting to this lower level in my own world and I would love to see if you can make it work faster with the support of the team. Selfishly, I'd also like to have your help on what you might consider misleading about the word "cloud".
At a practical level today, "Cloud" to me is usually, in this context, another term for IaaS virtualization and guest VMs that are assumed to use cloud-init. Obviously the term is vague and people also use it to speak of general SaaS offerings etc. But I hope you'd agree that it's less common to use it to speak of bare metal deployments - or at least, where it is, it's usually as a *provider* of a cloud (e.g. OpenStack/Kube (particularly with KubeVirt), or perhaps at smaller scale Proxmox etc.)
As I mentioned in the kubevirt thread, it is an explicit toplevel goal to have bootc go where podman goes, and work the same way podman works. It's not an accident that it lives in github.com/containers and not some other org.
Is podman part of "cloud"? Yes, and no. But in the same way as we (I think most here would agree) think of being able to run application containers as just part of the OS, being able to *boot* containers is also just an operating system feature, not a cloud feature.
On Tue, Mar 7, 2023, at 12:00 PM, Colin Walters wrote:
> Specifically, I'd like to transfer the existing code that lives in https://github.com/cgwalters/bootc-demo-base-images (specifically https://github.com/cgwalters/bootc-demo-base-images/blob/main/c9s.yaml ) into something CentOS-affiliated and explicitly maintained by a team. (Though I'm not super excited to move it to pagure like at least some other SIG content, but let's not get distracted by git hosting too much here).
To make this less abstract: https://github.com/containers/bootc/pull/75#issuecomment-1466357363 is something this SIG would debate, not just me.
Hey Colin,
I'm looking thru my email for stuff that might have gotten dropped. It looks like there was never a resolution to this. Is there anything I can do to move things along?
-- Shaun
Hi Colin, Hi Shaun!
This thread had dropped off my radar too until last week.
I think it'd be very interesting to build bootc for CentOS Stream, and looking ahead a bit, to experiment with building a bootc-based OKD/SCOS.
I went ahead and added Colin to the Cloud SIG group, and I also requested creation of a dist-git repo for bootc: https://git.centos.org/rpms/bootc (alternatively, we could also create a repo for it in https://gitlab.com/CentOS/cloud/rpms)
In case it helps anyone, here are some working notes I took for building other RPMs for the Cloud SIG on CBS: https://hackmd.io/Cfzd-r-5QKaFLIP-iCog0A?view
Happy to support this effort, please let me know if I can help in any way :)
Christian
Welcome to the Cloud SIG, Colin and bootc! :)
*Amy Marrich*
She/Her/Hers
Principal Technical Marketing Manager - Cloud Platforms
Red Hat, Inc https://www.redhat.com/
amy@redhat.com
Mobile: 954-818-0514
Slack: amarrich
IRC: spotz