Hi all,
we talked at the Fedora Infra workshop at Flock about the Jenkins CI our Fedora AltArch team is running for upstream projects, which we want to move to some public infra. So I've subscribed here to continue the discussion :-)
I've read the CentOS CI wiki(s) and I'm pretty sure it would work for our use case. My question is how far along you are with offering AltArch VMs as Jenkins slaves. In an email a few months back I read about aarch64 and ppc64/ppc64le, which would be good. I can work with the Marist College (and Linux Foundation) people to get us access to at least one more s390x guest on their mainframe. That way all the major arches would be covered. But is it possible to connect remote slaves?
What should the next step from our side be? Requesting a new project so we can set up the jobs even before we have all the slaves in place?
With regards,
Dan
On Sep 05 15:38, Dan Horák wrote:
> Hi all,
> we talked at the Fedora Infra workshop at Flock about the Jenkins CI our Fedora AltArch team is running for upstream projects, which we want to move to some public infra. So I've subscribed here to continue the discussion :-)
> I've read the CentOS CI wiki(s) and I'm pretty sure it would work for our use case. My question is how far along you are with offering AltArch VMs as Jenkins slaves. In an email a few months back I read about aarch64 and ppc64/ppc64le, which would be good. I can work with the Marist College (and Linux Foundation) people to get us access to at least one more s390x guest on their mainframe. That way all the major arches would be covered. But is it possible to connect remote slaves?
We have ppc64le VMs available for consumption now[1], aarch64 is coming soon. The way we handle Jenkins slaves in CentOS CI is a little bit different. CICO slaves (there's one per project) act more like a workspace than a traditional slave. Jenkins runs its triggers in the workspace, and then you are responsible for requesting nodes (or AltArch VMs) from Duffy to do the actual work.
Your code can be checked out in the workspace and then staged over to the Duffy nodes for your actual test runs.
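Roughly, a job script in the workspace could look something like the sketch below (this is only an illustration assuming the Duffy request/return calls and duffy.key file described on the wiki pages[1][2]; the exact endpoint, response fields, paths and ssh details for your project may differ):

  #!/usr/bin/env python
  # Minimal sketch of a CICO workspace job: lease a Duffy node, stage the
  # checkout over, run the tests there, and hand the node back.  The API
  # endpoint, response fields and the duffy.key path follow the wiki
  # examples and are assumptions -- adjust them for your project.
  import json
  import subprocess
  import urllib.request

  DUFFY = "http://admin.ci.centos.org:8080"
  KEY = open("duffy.key").read().strip()

  def duffy(path, **params):
      params["key"] = KEY
      query = "&".join("%s=%s" % item for item in sorted(params.items()))
      with urllib.request.urlopen("%s/%s?%s" % (DUFFY, path, query)) as resp:
          return json.loads(resp.read().decode())

  # ask for one ppc64le node from the multi-arch pool[1]
  lease = duffy("Node/get", ver="7", arch="ppc64le", count=1)
  host, ssid = lease["hosts"][0], lease["ssid"]

  try:
      # stage the workspace checkout onto the node and run the tests there
      subprocess.check_call(["rsync", "-a", "./", "root@%s:build/" % host])
      subprocess.check_call(["ssh", "root@%s" % host,
                             "cd build && ./autogen.sh && make && make check"])
  finally:
      # always return the node, pass or fail
      duffy("Node/done", ssid=ssid)

The important part is the lease/return cycle; everything in between is an ordinary rsync/ssh workflow driven from the workspace.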
Remote workers are a little tricky; let's see if we can do a breakout session on some of our options for s390x.
> What should the next step from our side be? Requesting a new project so we can set up the jobs even before we have all the slaves in place?
Yes! You can find on the wiki[2] a list of what we need in your ticket on bugs.centos.org.
> With regards,
> Dan
Cheers!
-- Brian Stinson
[1]: https://wiki.centos.org/QaWiki/CI/Multiarch
[2]: https://wiki.centos.org/QaWiki/CI/GettingStarted
On Tue, 5 Sep 2017 09:19:04 -0500 Brian Stinson <brian@bstinson.com> wrote:
> On Sep 05 15:38, Dan Horák wrote:
>> Hi all,
>> we talked at the Fedora Infra workshop at Flock about the Jenkins CI our Fedora AltArch team is running for upstream projects, which we want to move to some public infra. So I've subscribed here to continue the discussion :-)
>> I've read the CentOS CI wiki(s) and I'm pretty sure it would work for our use case. My question is how far along you are with offering AltArch VMs as Jenkins slaves. In an email a few months back I read about aarch64 and ppc64/ppc64le, which would be good. I can work with the Marist College (and Linux Foundation) people to get us access to at least one more s390x guest on their mainframe. That way all the major arches would be covered. But is it possible to connect remote slaves?
> We have ppc64le VMs available for consumption now[1], aarch64 is coming soon.
is there a plan to support Fedora in the VMs? I guess that would also add ppc64 to the list.
> The way we handle Jenkins slaves in CentOS CI is a little bit different. CICO slaves (there's one per project) act more like a workspace than a traditional slave. Jenkins runs its triggers in the workspace, and then you are responsible for requesting nodes (or AltArch VMs) from Duffy to do the actual work.
yup, it might require some work to adapt the standard Jenkins workflow to this scheme, but mostly we just need to "do/update an SCM checkout & rebuild & run make check" as a regular user
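for illustration, with a node already leased from Duffy as described above, that loop could be as small as something like this (the repo URL, the "builder" account and the DUFFY_HOST variable are made-up placeholders):

  #!/usr/bin/env python
  # Hypothetical helper for the "checkout & rebuild & make check" loop:
  # refresh the clone in the CICO workspace, then build and test on the
  # leased node as an unprivileged user.  Repo URL, the "builder" account
  # and DUFFY_HOST are made-up placeholders.
  import os
  import subprocess

  REPO = "https://github.com/example/project.git"
  HOST = os.environ["DUFFY_HOST"]

  # do/update the SCM checkout in the workspace
  if os.path.isdir("project/.git"):
      subprocess.check_call(["git", "-C", "project", "pull", "--ff-only"])
  else:
      subprocess.check_call(["git", "clone", REPO, "project"])

  # stage the sources and run the build as a regular user on the node
  subprocess.check_call(["ssh", "root@%s" % HOST, "useradd -m builder || true"])
  subprocess.check_call(["rsync", "-a", "project/",
                         "root@%s:/home/builder/project/" % HOST])
  subprocess.check_call(["ssh", "root@%s" % HOST,
                         "chown -R builder: /home/builder/project && "
                         "su - builder -c 'cd project && ./autogen.sh && make && make check'"])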
> Your code can be checked out in the workspace and then staged over to the Duffy nodes for your actual test runs.
> Remote workers are a little tricky; let's see if we can do a breakout session on some of our options for s390x.
my original idea was to have "static" s390x guest(s), but I think they could easily be re-provisioned remotely using s3270 and a startup REXX script, so in theory they could be part of Duffy and use the regular process even for s390x workers, just remote
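roughly what I have in mind is sketched below (only an illustration -- the z/VM host, guest user id, password handling and logon sequence are placeholders, and a real job would also check s3270's status output): logging the guest on lets its PROFILE EXEC, i.e. the startup REXX script, do the actual re-provisioning

  #!/usr/bin/env python
  # Hypothetical re-provisioning trigger: drive the scripted s3270 emulator
  # to log a z/VM guest on, so that its PROFILE EXEC (startup REXX script)
  # reinstalls and boots the worker.  Host, user id and password are
  # placeholders; error handling is omitted.
  import subprocess

  ZVM_HOST = "zvm.example.edu"
  USERID, PASSWORD = "CIWRK01", "secret"

  actions = [
      "Connect(%s)" % ZVM_HOST,
      "Wait(InputField)",
      "String(%s)" % USERID,      # fill in the USERID field on the logon screen
      "Tab()",
      "String(%s)" % PASSWORD,    # ...and the PASSWORD field
      "Enter()",
      "Wait(InputField)",         # guest logs on, CMS IPLs and runs PROFILE EXEC
      "Disconnect()",
      "Quit()",
  ]

  s3270 = subprocess.Popen(["s3270"], stdin=subprocess.PIPE, universal_newlines=True)
  s3270.communicate("\n".join(actions) + "\n")

the same lease/return flow as for the other arches could then drive this, just pointed at the remote guests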
>> What should the next step from our side be? Requesting a new project so we can set up the jobs even before we have all the slaves in place?
> Yes! You can find on the wiki[2] a list of what we need in your ticket on bugs.centos.org.
done
Dan