TL;DR: CentOS CI is going hardwareless, and if you wish your project to keep using it, we need your opt-in by August 2022. There is a Dojo Summer 2022 (https://wiki.centos.org/Events/Dojo/Summer2022) session happening on Friday, June 17th, that will explain further technical details.
Hello everyone,
As many of you know, since the beginning of this year we have been reevaluating the future of CentOS CI, as the hardware currently used for it is out of warranty. CentOS CI was built on community hardware donations, maintained on a best-effort basis by our team; with no warranties, when a physical machine dies we have no means to replace it. Because the data center we are moving to requires in-warranty hardware for supportability, our current hardware cannot be moved along with the upcoming data center changes. We decided to take this opportunity to modernize our infrastructure and push it to a hybrid cloud environment. Duffy CI will become the main tool from now on, so that we can support the CI workflow and best practices in the cloud, and for this reason the current hardware infra will soon no longer be available. However, as part of our effort to keep providing resources and supporting CI best practices for projects, our team is adapting Duffy CI so that we can maintain most of the characteristics of our current, physical-based offering.
At the technical level, what does that mean for you, CI tenants?
- A new Duffy API service will replace the existing one: while it will be running in compatibility/legacy mode with the previous version, you will need to adapt your workflow to the new API (more details below)
- We will transition to AWS EC2 instances for the aarch64 and x86_64 architectures by default, with a (limited) option to request “metal” instances for projects requiring virtualization for their tests (like KVM/vagrant/etc)
- We will keep a (very small) Power9 infra “on-premise” (AWS does not support ppc64le) for the ppc64le tests (available through a dedicated VPN tunnel)
- The existing OpenShift cluster will also be decommissioned and a new one (hosted in AWS, so without an option to run the kubevirt operator nor VMs) will be used instead (you will have to migrate from one to the other)
With that being said, tenants can start preparing for the changes, which must be completed by the end of December 2022, at which point Duffy API legacy mode will be removed. You are required to opt in if you and/or your team want to use Duffy CI. Projects will only be migrated if they reply to this email confirming that they wish to proceed. Note that not opting in means that your API key will not be migrated, so all your requests for temporary/ephemeral nodes will be rejected by the new Duffy API.
The maximum decommission deadline of the current hardware infrastructure is December 12th, 2022, and the new Duffy CI will go live in August 2022, so please complete your migration by the end of CY22. Reminders of the deadlines and of the opt-in requirements will be sent monthly, but your opt-in confirmation is required by August 2022. As we approach December, the frequency of deadline reminders will increase so that we can ensure effective communication throughout the process.
Here are the phases in which we will migrate the CI infra:
Phase 1 - Deploy Duffy V3 (August 2022)
- Deploy in legacy/compatibility mode, so existing tenants (that opted in!) can still request Duffy nodes the same way (e.g. with 'python-cicoclient'): no change on the tenants' side, and exactly the same hardware for tests (transparent migration)
- New Duffy API endpoint becomes available, and tenants can start adapting their workflows to point to the new API (new ‘duffy-cli’ tool coming, with documentation); see the sketch after this list
- Bare-metal and VM options will already be available through the new API (x86_64, aarch64, ppc64le)
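To give a rough idea of what adapting a workflow could look like, here is a minimal sketch in Python of requesting and retiring an ephemeral node session against the new Duffy API over plain HTTP. The base URL, pool name, payload shape and authentication below are illustrative assumptions, not the confirmed interface; the upcoming duffy-cli documentation will describe the real one.

import requests

DUFFY_URL = "https://duffy.ci.centos.org/api/v1"   # assumed base URL
AUTH = ("my-tenant", "my-api-key")                  # assumed tenant name / API key pair

# Request one node from a pool; the pool name, payload and response field names
# are illustrative assumptions, not the confirmed API contract.
resp = requests.post(
    f"{DUFFY_URL}/sessions",
    json={"nodes_specs": [{"pool": "virt-ec2-x86_64", "quantity": 1}]},
    auth=AUTH,
    timeout=60,
)
resp.raise_for_status()
session = resp.json()["session"]
hosts = [node["hostname"] for node in session["nodes"]]
print("Got nodes:", hosts)

# ... run your tests over SSH against the returned hosts ...

# Retire the session so the nodes are recycled (the exact retire call is also an assumption)
requests.delete(f"{DUFFY_URL}/sessions/{session['id']}", auth=AUTH, timeout=60).raise_for_status()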
Phase 2 - Hybrid Cloud (October 2022)
- The legacy/compatibility API endpoint will hand over EC2 instances instead of local SeaMicro nodes (VMs vs bare metal)
- Bare-metal options will be available through the new API only
- Legacy SeaMicro and aarch64/ThunderX hardware are decommissioned
- The only remaining "on-premise" option is ppc64le (local cloud)
Phase 3 - Decommission (December 2022)
- The legacy/compatibility API is deprecated and requests (even for EC2 instances) will no longer be accepted
- All tenants that opted in will be using only EC2 for aarch64/x86_64 and the on-premise cloud for ppc64le
OpenShift new deployment planning and timeline
To be defined (deadline for planning and timeline: end of June 2022)
Do not hesitate to reach out if you have any questions. Note that there will be a dedicated session about the future of the CentOS CI infra at the next CentOS Dojo, happening on June 17th (see Dojo Summer 2022, https://wiki.centos.org/Events/Dojo/Summer2022). That session will be recorded and made available on YouTube afterwards, but if you have any questions, feel free to join the CentOS Dojo and reach out to us!
Best regards,
On Wed, 2022-06-15 at 08:49 -0300, Camila Granella wrote:
With that being said, tenants can start preparing for the changes to happen with the maximum deadline of the end of December 2022 wherein at this point, Duffy API legacy mode will be removed. You are required to opt-in if you and/or your team want to use Duffy CI. Projects will only be migrated if they reply to this email confirming that they wish to proceed. Worth knowing that not opting in means that your API key will not be migrated and so all your requests to get temporary/ephemeral nodes will be rejected by the new Duffy API.
The Hyperscale SIG is currently using OpenShift for various CI/CD pipelines, and would like to continue doing so. We're not currently using Duffy (to my knowledge), but we're interested in building some VM-based test pipelines down the road (e.g. for end-to-end testing of our distro spins and our systemd builds), so that's something we'll want to look into as well.
Cheers Davide
On 15/06/2022 17:02, Davide Cavalca via CI-users wrote:
On Wed, 2022-06-15 at 08:49 -0300, Camila Granella wrote:
With that being said, tenants can start preparing for the changes to happen with the maximum deadline of the end of December 2022 wherein at this point, Duffy API legacy mode will be removed. You are required to opt-in if you and/or your team want to use Duffy CI. Projects will only be migrated if they reply to this email confirming that they wish to proceed. Worth knowing that not opting in means that your API key will not be migrated and so all your requests to get temporary/ephemeral nodes will be rejected by the new Duffy API.
The Hyperscale SIG is currently using OpenShift for various CI/CD pipelines, and would like to continue doing so. We're not currently using Duffy (to my knowledge), but we're interested in building some VM-based test pipelines down the road (e.g. for end-to-end testing of our distro spins and our systemd builds), so that's something we'll want to look into as well.
Cheers Davide
Hi Davide,
So that will mean you'll have to request resources from the newer Duffy service to get some EC2 instances for these tests ;-) BTW, I'm giving a dedicated talk at the next CentOS Dojo this Friday to explain the whole plan: https://wiki.centos.org/Events/Dojo/Summer2022
Normally the talk will be recorded and so will be available afterwards on YouTube for people still interested (and not able to attend the virtual Dojo event).
Hi,
On Wed, Jun 15, 2022 at 5:04 PM Davide Cavalca via CI-users <ci-users@centos.org> wrote:
On Wed, 2022-06-15 at 08:49 -0300, Camila Granella wrote:
With that being said, tenants can start preparing for the changes to happen with the maximum deadline of the end of December 2022 wherein at this point, Duffy API legacy mode will be removed. You are required to opt-in if you and/or your team want to use Duffy CI. Projects will only be migrated if they reply to this email confirming that they wish to proceed. Worth knowing that not opting in means that your API key will not be migrated and so all your requests to get temporary/ephemeral nodes will be rejected by the new Duffy API.
The Hyperscale SIG is currently using OpenShift for various CI/CD pipelines, and would like to continue doing so. We're not currently using Duffy (to my knowledge), but we're interested in building some VM-based test pipelines down the road (e.g. for end-to-end testing of our distro spins and our systemd builds), so that's something we'll want to look into as well.
We provide a service for testing against VMs for CentOS called Testing Farm:
Feel free to reach out to me for details; we support the x86_64 and aarch64 architectures on AWS EC2 instances. It could spare you some cycles compared to rolling your own.
Best regards, /M
Cheers Davide
On 6/15/22 13:49, Camila Granella wrote:
With that being said, tenants can start preparing for the changes to happen with the maximum deadline of the end of December 2022 wherein at this point, Duffy API legacy mode will be removed. You are required to opt-in if you and/or your team want to use Duffy CI. Projects will only be migrated if they reply to this email confirming that they wish to proceed. Worth knowing that not opting in means that your API key will not be migrated and so all your requests to get temporary/ephemeral nodes will be rejected by the new Duffy API.
Thanks for the great news overall! We (Plumbers/systemd)[0] are definitely interested in the new Duffy instance and can help with testing it if needed.
Cheers, Frantisek
[0] https://jenkins-systemd.apps.ocp.ci.centos.org
Great, we appreciate it! I'll confirm your opt-in.
On Thu, Jun 16, 2022 at 5:20 AM František Šumšal frantisek@sumsal.cz wrote:
On 6/15/22 13:49, Camila Granella wrote:
With that being said, tenants can start preparing for the changes to happen with the maximum deadline of the end of December 2022 wherein at this point, Duffy API legacy mode will be removed. You are required to opt-in if you and/or your team want to use Duffy CI. Projects will only be migrated if they reply to this email confirming that they wish to proceed. Worth knowing that not opting in means that your API key will not be migrated and so all your requests to get temporary/ephemeral nodes will be rejected by the new Duffy API.
Thanks for the great news overall! We (Plumbers/systemd)[0] are definitely interested in the new Duffy instance and can help with testing it if needed.
Cheers, Frantisek
[0] https://jenkins-systemd.apps.ocp.ci.centos.org
-- Frantisek Sumsal GPG key ID: 0xFB738CE27B634E4B
On 15/06/2022 13:49, Camila Granella wrote: <snip>
Do not hesitate to reach out if you have any questions. It is worth knowing that there will be a dedicated session about the Future of CentOS CI infra at the next CentOS Dojo happening on June 17h (check Dojo Summer 2022 https://wiki.centos.org/Events/Dojo/Summer2022). That session will be recorded and then available on Youtube but if you have any questions. Feel free to join the CentOS Dojo and reach out to us!
The presentation was recorded and is now available at https://www.youtube.com/watch?v=aqgs-3NnRmA
Kind Regards,
Hello Camila,
Camila Granella [2022-06-15 8:49 -0300]:
We will transition to AWS EC2 instances for the aarch64 and x86_64 architectures by default, with a (limited) option to request “metal” instances for projects requiring virtualization for their tests (like KVM/vagrant/etc)
Does that apply to OpenShift as well? I.e. can pods be placed onto a metal cluster (with /dev/kvm access) like they can on the current infra?
With that being said, tenants can start preparing for the changes to happen with the maximum deadline of the end of December 2022 wherein at this point, Duffy API legacy mode will be removed. You are required to opt-in if you and/or your team want to use Duffy CI. Projects will only be migrated if they reply to this email confirming that they wish to proceed.
Our team currently uses the "frontdoor" project on console-openshift-console.apps.ocp.ci.centos.org, where we run part of our CI. We strictly depend on /dev/kvm access in our pods [1], so if it's possible to keep that, I'd like to opt into the new infra. Otherwise, I'll look for another place (I'm currently experimenting with PSI, but this still has a lot of problems).
We don't use "duffy" at all, just plain k8s/OCP.
Thank you!
Martin
[1] the kubevirt operator does not suffice, and nested virt is usually too slow and brittle
Hi Martin,
Apologies for the delay in response, but access to /dev/kvm is not something we want to support in the new setup. Sorry we can't provide this for you, but if your workflow can change to fit our infrastructure, we would welcome you.
Thanks, Mark
On Mon, Jul 11, 2022 at 8:05 AM Martin Pitt mpitt@redhat.com wrote:
Hello Camila,
Camila Granella [2022-06-15 8:49 -0300]:
We will transition to AWS EC2 instances for the aarch64 and x86_64 architectures by default, with a (limited) option to request “metal” instances for projects requiring virtualization for their tests (like KVM/vagrant/etc)
Does that apply to OpenShift as well? I.e. can pods be placed onto a metal cluster (with /dev/kvm access) like they can on the current infra?
With that being said, tenants can start preparing for the changes to happen with the maximum deadline of the end of December 2022 wherein at this point, Duffy API legacy mode will be removed. You are required to opt-in if you and/or your team want to use Duffy CI. Projects will only be migrated if they reply to this email confirming that they wish to proceed.
Our team currently uses the "frontdoor" project on console-openshift-console.apps.ocp.ci.centos.org, where we run part of our CI. We strictly depend on /dev/kvm access in our pods [1], so if it's possible to keep that, I'd like to opt into the new infra. Otherwise, I'll look for another place (I'm currently experimenting with PSI, but this still has a lot of problems).
We don't use "duffy" at all, just plain k8s/OCP.
Thank you!
Martin
[1] the kubevirt operator does not suffice, and nested virt is usually too slow and brittle
Hello Mark,
Mark O'Brien [2022-07-20 15:48 +0100]:
Apologies for the delay in response but access to /dev/kvm is not something we want to support in the new setup. Sorry we can't provide this for you but if your workflow can change to fit in our infrastructure we would welcome you.
Ack, thanks for confirming! We can't do without /dev/kvm, so no need to migrate our project then.
Out of interest, why is it so bad/hard to support? It's not like you need it to burn CPU cycles or allocate memory or anything like that :-)
Thanks,
Martin
On Wed, Jul 20, 2022 at 11:42 AM Martin Pitt mpitt@redhat.com wrote:
Hello Mark,
Mark O'Brien [2022-07-20 15:48 +0100]:
Apologies for the delay in response but access to /dev/kvm is not something we want to support in the new setup. Sorry we can't provide this for you but if your workflow can change to fit in our infrastructure we would welcome you.
Ack, thanks for confirming! We can't do without /dev/kvm, so no need to migrate our project then.
Out of interest, why is it so bad/hard to support? It's not like you need it to burn CPU cycles or allocate memory or so :-)
FWIW, for OpenShift's own upstream CI (based on Prow) we deployed https://github.com/cgwalters/kvm-device-plugin, which is a new fork of just the minimal infrastructure from KubeVirt needed for this. It runs on a GCP cluster ("build02") and we use it e.g. for testing RHEL CoreOS and other upstream projects (some parts of Fedora CoreOS too). There's some use of the OCP Prow instance for projects that live outside the github.com/openshift namespace, e.g. in github.com/containers and github.com/coreos, although it's not a focus for the CI team.
But I will support use of the plugin elsewhere and am happy to share tips/advice!
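For anyone exploring that route, below is a minimal sketch (an illustration, not the CI team's supported setup) of a pod that asks for /dev/kvm through the extended resource such a device plugin advertises, using the Python kubernetes client. The resource name devices.kubevirt.io/kvm follows the KubeVirt device plugin's convention; the namespace and image are hypothetical placeholders.

from kubernetes import client, config

config.load_kube_config()  # or load_incluster_config() when running inside the cluster

pod = client.V1Pod(
    metadata=client.V1ObjectMeta(name="kvm-test", namespace="frontdoor"),  # hypothetical namespace
    spec=client.V1PodSpec(
        restart_policy="Never",
        containers=[
            client.V1Container(
                name="test",
                image="quay.io/example/kvm-test:latest",  # hypothetical image
                command=["ls", "-l", "/dev/kvm"],
                resources=client.V1ResourceRequirements(
                    # the device plugin injects /dev/kvm into containers that request this resource
                    limits={"devices.kubevirt.io/kvm": "1"},
                ),
            )
        ],
    ),
)

client.CoreV1Api().create_namespaced_pod(namespace="frontdoor", body=pod)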
On 20/07/2022 17:39, Martin Pitt wrote:
Hello Mark,
Mark O'Brien [2022-07-20 15:48 +0100]:
Apologies for the delay in response but access to /dev/kvm is not something we want to support in the new setup. Sorry we can't provide this for you but if your workflow can change to fit in our infrastructure we would welcome you.
Ack, thanks for confirming! We can't do without /dev/kvm, so no need to migrate our project then.
Out of interest, why is it so bad/hard to support? It's not like you need it to burn CPU cycles or allocate memory or so :-)
Thanks,
Martin
Well, it's also due to the fact that OpenShift on EC2 runs on VMs and not on bare metal, and AWS doesn't support nested virt ...
On Wed, Jun 15, 2022 at 08:49:34AM -0300, Camila Granella wrote:
*TL;DR: CentOS CI is going hardwareless and if you wish your project remains using it, we need your opt-in by August 2022.
The Ceph-CSI project is a very happy user of the current CI infrastructure, and we definitely wish to remain using it. We run a set of jobs on a mix of bare-metal systems provided by Duffy and some OpenShift native (containerized).
With recent minikube versions we might be able to run inside a VM, however we do require setting up a (minimal) Ceph cluster per job. The VMs we run on bare-metal get extra disks for the backing storage. If that is an option for the VMs we obtain through Duffy, we should be able to adapt our jobs.
Many thanks! Niels
Thank you Niels, confirming your opt-in.
On Mon, Jul 11, 2022 at 1:52 PM Niels de Vos ndevos@redhat.com wrote:
On Wed, Jun 15, 2022 at 08:49:34AM -0300, Camila Granella wrote:
*TL;DR: CentOS CI is going hardwareless and if you wish your project remains using it, we need your opt-in by August 2022.
The Ceph-CSI project is a very happy user of the current CI infrastructure, and we definitely wish to remain using it. We run a set of jobs on a mix of bare-metal systems provided by Duffy and some OpenShift native (containerized).
With recent minikube versions we might be able to run inside a VM, however we do require setting up a (minimal) Ceph cluster per job. The VMs we run on bare-metal get extra disks for the backing storage. If that is an option for the VMs we obtain through Duffy, we should be able to adapt our jobs.
Many thanks! Niels
On 11/07/2022 18:52, Niels de Vos wrote:
On Wed, Jun 15, 2022 at 08:49:34AM -0300, Camila Granella wrote:
*TL;DR: CentOS CI is going hardwareless and if you wish your project remains using it, we need your opt-in by August 2022.
The Ceph-CSI project is a very happy user of the current CI infrastructure, and we definitely wish to remain using it. We run a set of jobs on a mix of bare-metal systems provided by Duffy and some OpenShift native (containerized).
With recent minikube versions we might be able to run inside a VM, however we do require setting up a (minimal) Ceph cluster per job. The VMs we run on bare-metal get extra disks for the backing storage. If that is an option for the VMs we obtain through Duffy, we should be able to adapt our jobs.
Many thanks! Niels
The EC2 VMs we'll provision will not get any extra disk[s] (not planned so far, but we might reconsider and adapt, as it should be easy). The other option is to ask Duffy for a bare-metal node and continue provisioning VMs on top yourself, as you are doing right now. Which option do you think would match your needs?
On Tue, Jul 26, 2022 at 10:18:54AM +0200, Fabian Arrotin wrote:
On 11/07/2022 18:52, Niels de Vos wrote:
On Wed, Jun 15, 2022 at 08:49:34AM -0300, Camila Granella wrote:
*TL;DR: CentOS CI is going hardwareless and if you wish your project remains using it, we need your opt-in by August 2022.
The Ceph-CSI project is a very happy user of the current CI infrastructure, and we definitely wish to remain using it. We run a set of jobs on a mix of bare-metal systems provided by Duffy and some OpenShift native (containerized).
With recent minikube versions we might be able to run inside a VM, however we do require setting up a (minimal) Ceph cluster per job. The VMs we run on bare-metal get extra disks for the backing storage. If that is an option for the VMs we obtain through Duffy, we should be able to adapt our jobs.
Many thanks! Niels
The EC2 VMs we'll provision will not get any extra disk[s] (not planned so far but we might reconsider and adapt as it should be easy). The other option is to ask for a bare-metal node to Duffy and you just continue to provision yourself VMs on top as you are doing right now. What do you think should match your needs ?
A bare-metal node is the easiest for us. We might be able to run the tests on a VM, but that usually is way less stable and requires more tuning and correction of the CI jobs as updates cause (temporary) breakage. A VM needs to have sufficient CPU, RAM and storage so that we can create block devices and run Ceph on top of them (in containers?).
Thanks, Niels
On 26/07/2022 11:12, Niels de Vos wrote:
On Tue, Jul 26, 2022 at 10:18:54AM +0200, Fabian Arrotin wrote:
On 11/07/2022 18:52, Niels de Vos wrote:
On Wed, Jun 15, 2022 at 08:49:34AM -0300, Camila Granella wrote:
*TL;DR: CentOS CI is going hardwareless and if you wish your project remains using it, we need your opt-in by August 2022.
The Ceph-CSI project is a very happy user of the current CI infrastructure, and we definitely wish to remain using it. We run a set of jobs on a mix of bare-metal systems provided by Duffy and some OpenShift native (containerized).
With recent minikube versions we might be able to run inside a VM, however we do require setting up a (minimal) Ceph cluster per job. The VMs we run on bare-metal get extra disks for the backing storage. If that is an option for the VMs we obtain through Duffy, we should be able to adapt our jobs.
Many thanks! Niels
The EC2 VMs we'll provision will not get any extra disk[s] (not planned so far but we might reconsider and adapt as it should be easy). The other option is to ask for a bare-metal node to Duffy and you just continue to provision yourself VMs on top as you are doing right now. What do you think should match your needs ?
A bare-metal node is the easiest for us. We might be able to run the tests on a VM, but that usually is way less stable and requires more tuning and correction of the CI jobs as updates cause (temporary) breakage. A VM needs to have sufficient CPU, RAM and storage to that we can create block-devices and run Ceph on top of that (in containers?).
Thanks, Niels
So you can still ask for bare metal (select the correct pool in the new Duffy API), and so nothing should change for you.
On Tue, Jul 26, 2022 at 12:17:33PM +0200, Fabian Arrotin wrote:
On 26/07/2022 11:12, Niels de Vos wrote:
On Tue, Jul 26, 2022 at 10:18:54AM +0200, Fabian Arrotin wrote:
On 11/07/2022 18:52, Niels de Vos wrote:
On Wed, Jun 15, 2022 at 08:49:34AM -0300, Camila Granella wrote:
*TL;DR: CentOS CI is going hardwareless and if you wish your project remains using it, we need your opt-in by August 2022.
The Ceph-CSI project is a very happy user of the current CI infrastructure, and we definitely wish to remain using it. We run a set of jobs on a mix of bare-metal systems provided by Duffy and some OpenShift native (containerized).
With recent minikube versions we might be able to run inside a VM, however we do require setting up a (minimal) Ceph cluster per job. The VMs we run on bare-metal get extra disks for the backing storage. If that is an option for the VMs we obtain through Duffy, we should be able to adapt our jobs.
Many thanks! Niels
The EC2 VMs we'll provision will not get any extra disk[s] (not planned so far but we might reconsider and adapt as it should be easy). The other option is to ask for a bare-metal node to Duffy and you just continue to provision yourself VMs on top as you are doing right now. What do you think should match your needs ?
A bare-metal node is the easiest for us. We might be able to run the tests on a VM, but that usually is way less stable and requires more tuning and correction of the CI jobs as updates cause (temporary) breakage. A VM needs to have sufficient CPU, RAM and storage to that we can create block-devices and run Ceph on top of that (in containers?).
Thanks, Niels
So you can still ask for bare-metal (select the correct pool in new duffy api) and so nothing should change for you.
Yes, and we're happy with that. I expect we need to prepare for a VM-only solution in the future, right?
Cheers, Niels
On 26/07/2022 15:19, Niels de Vos wrote:
On Tue, Jul 26, 2022 at 12:17:33PM +0200, Fabian Arrotin wrote:
On 26/07/2022 11:12, Niels de Vos wrote:
On Tue, Jul 26, 2022 at 10:18:54AM +0200, Fabian Arrotin wrote:
On 11/07/2022 18:52, Niels de Vos wrote:
On Wed, Jun 15, 2022 at 08:49:34AM -0300, Camila Granella wrote:
*TL;DR: CentOS CI is going hardwareless and if you wish your project remains using it, we need your opt-in by August 2022.
The Ceph-CSI project is a very happy user of the current CI infrastructure, and we definitely wish to remain using it. We run a set of jobs on a mix of bare-metal systems provided by Duffy and some OpenShift native (containerized).
With recent minikube versions we might be able to run inside a VM, however we do require setting up a (minimal) Ceph cluster per job. The VMs we run on bare-metal get extra disks for the backing storage. If that is an option for the VMs we obtain through Duffy, we should be able to adapt our jobs.
Many thanks! Niels
The EC2 VMs we'll provision will not get any extra disk[s] (not planned so far but we might reconsider and adapt as it should be easy). The other option is to ask for a bare-metal node to Duffy and you just continue to provision yourself VMs on top as you are doing right now. What do you think should match your needs ?
A bare-metal node is the easiest for us. We might be able to run the tests on a VM, but that usually is way less stable and requires more tuning and correction of the CI jobs as updates cause (temporary) breakage. A VM needs to have sufficient CPU, RAM and storage to that we can create block-devices and run Ceph on top of that (in containers?).
Thanks, Niels
So you can still ask for bare-metal (select the correct pool in new duffy api) and so nothing should change for you.
Yes, and we're happy with that. I expect we need to prepare for a VM-only solution in the future, right?
Cheers, Niels
Just to let you know that, as of today, all EC2 instances that one can request through the Duffy API will have a second (unconfigured) EBS volume, so each tenant can use it however they want for their tests.
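As an illustration only (not part of the Duffy tooling), here is a sketch of how a job could pick up that spare volume from inside the instance, assuming it shows up as the only unpartitioned, unmounted disk; the device naming and filesystem choice are assumptions to adapt to your own jobs.

import json
import subprocess

def find_spare_disk():
    # list block devices; a bare disk with no partitions ("children") and no
    # mountpoint is taken to be the extra EBS volume (an assumption)
    out = subprocess.run(["lsblk", "--json", "-o", "NAME,TYPE,MOUNTPOINT"],
                         check=True, capture_output=True, text=True).stdout
    for dev in json.loads(out)["blockdevices"]:
        if dev["type"] == "disk" and not dev.get("children") and not dev.get("mountpoint"):
            return "/dev/" + dev["name"]
    return None

disk = find_spare_disk()
if disk:
    subprocess.run(["mkfs.xfs", "-f", disk], check=True)       # filesystem choice is up to you
    subprocess.run(["mkdir", "-p", "/mnt/scratch"], check=True)
    subprocess.run(["mount", disk, "/mnt/scratch"], check=True)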
Happy testing!
Hello all,
Camila Granella [2022-06-15 8:49 -0300]:
The existing OpenShift cluster will be also decommissioned and a new one (hosted in AWS, so without an option to run kubevirt operator nor VMs) will be then used (you will have to migrate from one to the other)
I know that a few months ago we said that we wouldn't need a tenant on the new OpenShift cluster for the "frontdoor" (Cockpit/composer/subscription-manager UI etc. CI) project. But it turns out it would be good after all, as we still have three small, but important services running on the current cluster: our GitHub webhook and a prometheus/grafana pair for our metrics. These don't take a lot of resources, and don't need /dev/kvm, but they do need to live on the public internet.
So if we could opt into that migration still, that'd be great! Do I need to sign up for that anywhere?
Thank you,
Martin for the Cockpit team
On 29/11/2022 15:08, Martin Pitt wrote:
Hello all,
Camila Granella [2022-06-15 8:49 -0300]:
The existing OpenShift cluster will be also decommissioned and a new one (hosted in AWS, so without an option to run kubevirt operator nor VMs) will be then used (you will have to migrate from one to the other)
I know that a few months ago we said that we wouldn't need a tenant on the new OpenShift cluster for the "frontdoor" (Cockpit/composer/subscription-manager UI etc. CI) project. But it turns out it would be good after all, as we still have three small, but important services running on the current cluster: our GitHub webhook and a prometheus/grafana pair for our metrics. These don't take a lot of resources, and don't need /dev/kvm, but they do need to live on the public internet.
So if we could opt into that migration still, that'd be great! Do I need to sign up for that anywhere?
Thank you,
Martin for the Cockpit team
Hi Martin,
Well, you probably just saw the other mail sent by Camila about the move to AWS. If you really want to be added there (it's just a group in FAS/ACO), you'd need to create a ticket on the centos-infra tracker (https://pagure.io/centos-infra/issues): IIRC there is even a template there for OCP CI access. We'll then discuss your request with Camila :)