Hi, so I got some more time to run through the process of onboarding.
I got a simple job created via JJB that runs directly on the slave. At this point though I'd like to load some custom software (rpmdistro-gitoverlay).
Now my understanding is that we only have a non-root user on the slave VM, and so if we want to do anything that isn't already installed, we should call out to Duffy?
My first question is - has anyone tried out writing an Ansible dynamic inventory script for Duffy? The demo https://github.com/kbsingh/centos-ci-scripts/blob/master/build_python_script... is kind of obviously a poor man's Ansible =)
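To make that concrete, here's the kind of minimal sketch I have in mind. The endpoint, key file location, and the "hosts"/"ssid" JSON keys are assumptions lifted from the demo script above, so treat this as a sketch rather than anything tested:

#!/usr/bin/env python
# Hypothetical sketch: an Ansible dynamic inventory backed by Duffy.
# Endpoint, key file, and response keys are assumed from the demo
# script; a real script should also cache the allocation so repeated
# --list calls don't keep requesting fresh nodes.
import json
import os
import sys
import urllib2

API = "http://admin.ci.centos.org:8080"
KEY = open(os.path.expanduser("~/duffy.key")).read().strip()

def main():
    if len(sys.argv) > 1 and sys.argv[1] == "--list":
        url = "%s/Node/get?key=%s&ver=7&arch=x86_64" % (API, KEY)
        data = json.load(urllib2.urlopen(url))
        # Expose the session id as a group var so a teardown play can
        # call Node/done later and we don't leak nodes.
        print(json.dumps({
            "duffy": {
                "hosts": data["hosts"],
                "vars": {"duffy_ssid": data["ssid"]},
            }
        }))
    else:
        # --host <name>: no per-host vars in this sketch
        print(json.dumps({}))

if __name__ == "__main__":
    main()

Then something like `ansible -i this-script.py duffy -m ping` would target the allocated nodes.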
I have 3 other questions about Duffy. First, it might be interesting to investigate a configuration like this:
http://www.projectatomic.io/blog/2015/05/building-and-running-live-atomic/
which basically runs the OS out of RAM directly. This model is particularly well suited to workloads like Duffy's, where you don't *actually* want the OS to be persistent on disk. Using the disks as swap space instead of xfs/ext4 can be a dramatic speed improvement. (A large part of the win is avoiding fsync and journaling.)
Beyond that, has any thought been given to also supporting e.g. OpenStack as a provisioning API? Or for that matter allocating a Kubernetes namespace or OpenShift project?
My workloads for Project Atomic are going to be pretty mixed across all of these actually.
Hi!
I've made a Python client, CLI and library to interface with Duffy: python-cicoclient.
- Docs: http://python-cicoclient.readthedocs.org/en/latest/
- Repo: https://github.com/dmsimard/python-cicoclient
You can install it from source, from PyPI, or as an RPM through my COPR repository.
If you intend to use Ansible in your jobs, there is an Ansible role you might be interested in: https://github.com/redhat-openstack/ansible-role-ci-centos
Otherwise, straight commands like "cico node get" or "cico node done <ssid>" work too. Just make sure to save the SSID somewhere persistent so you don't leak nodes when jobs fail.
If you want a custom workflow, you can also abstract Duffy away and import python-cicoclient as a library. For example, the CLI interface just calls out to the wrapper:
- https://github.com/dmsimard/python-cicoclient/blob/master/cicoclient/cli.py
- https://github.com/dmsimard/python-cicoclient/blob/master/cicoclient/wrapper...
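Roughly like this; a sketch only, and the exact CicoWrapper method names and arguments shown here are from memory, so double-check them against the wrapper source linked above:

#!/usr/bin/env python
# Sketch of using python-cicoclient as a library instead of the CLI.
# The node_get()/node_done() signatures are assumptions; verify against
# cicoclient/wrapper.py before relying on them.
from cicoclient.wrapper import CicoWrapper

api = CicoWrapper(endpoint="http://admin.ci.centos.org:8080/",
                  api_key="<your api key>")

# Request one CentOS 7 x86_64 node; we get host details plus a session id.
hosts, ssid = api.node_get(arch="x86_64", ver="7", count=1)

# Persist the SSID immediately so a cleanup job can still release the
# node if everything below fails.
with open("cico.ssid", "w") as f:
    f.write(ssid)

try:
    for fqdn in hosts:
        print("allocated %s (session %s)" % (fqdn, ssid))
    # ... ssh to the node(s) and run your actual tests here ...
finally:
    # Hand the node back so it doesn't leak.
    api.node_done(ssid=ssid)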
Welcome and good luck, let me know if you have any questions!
David Moreau Simard
Senior Software Engineer | OpenStack RDO
dmsimard = [irc, github, twitter]
On Tue, Apr 12, 2016 at 4:07 PM, Colin Walters walters@verbum.org wrote:
> Beyond that, has any thought been given to also supporting e.g. OpenStack as a provisioning API?
Ah, for some reason I missed that. An OpenStack environment is definitely in the works, stay tuned.
David Moreau Simard
Senior Software Engineer | OpenStack RDO
dmsimard = [irc, github, twitter]
On 12/04/16 21:16, David Moreau Simard wrote:
> On Tue, Apr 12, 2016 at 4:07 PM, Colin Walters walters@verbum.org wrote:
>> Beyond that, has any thought been given to also supporting e.g. OpenStack as a provisioning API?
> Ah, for some reason I missed that. An OpenStack environment is definitely in the works, stay tuned.
One thing to note here is that this will only give you cloud workloads though, not the baremetal nodes. There has been some conversation around having an OpenStack Ironic interface to the baremetal nodes, and while that's on the roadmap, it's not anywhere in the immediate to near future.
Regards
--
Karanbir Singh, Project Lead, The CentOS Project
+44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS
GnuPG Key : http://www.karan.org/publickey.asc
On Tue, Apr 12, 2016 at 4:07 PM, Colin Walters walters@verbum.org wrote:
> My first question is - has anyone tried out writing an Ansible dynamic inventory script for Duffy? The demo https://github.com/kbsingh/centos-ci-scripts/blob/master/build_python_script... is kind of obviously a poor man's Ansible =)
I have a slightly fancier version here: https://github.com/purpleidea/mgmt/blob/master/misc/centos-ci.py It does everything I need, but not more. DMS has some fancier stuff if you need that sort of thing, but I haven't tried it.
Cheers, James
On 12/04/16 21:07, Colin Walters wrote:
> Now my understanding is that we only have a non-root user on the slave VM, and so if we want to do anything that isn't already installed, we should call out to Duffy?
Right, and you should not really be running any actual tests on the slave VM; it's shared amongst projects too.
> My first question is - has anyone tried out writing an Ansible dynamic inventory script for Duffy? The demo https://github.com/kbsingh/centos-ci-scripts/blob/master/build_python_script.py is kind of obviously a poor man's Ansible =)
> I have 3 other questions about Duffy. First, it might be interesting to investigate a configuration like this:
> http://www.projectatomic.io/blog/2015/05/building-and-running-live-atomic/
> which basically runs the OS out of RAM directly. This model is particularly well suited to workloads like Duffy's, where you don't *actually* want the OS to be persistent on disk. Using the disks as swap space instead of xfs/ext4 can be a dramatic speed improvement. (A large part of the win is avoiding fsync and journaling.)
Is this for the speed of running tests, or are you optimising for speed of redeployment of Duffy nodes?
> Beyond that, has any thought been given to also supporting e.g. OpenStack as a provisioning API? Or for that matter allocating a Kubernetes namespace or OpenShift project?
The PaaS SIG is working towards an OpenShift Origin release; once that comes through, we definitely want to have an OpenShift instance in the ci.centos.org infra for folks to consume.
> My workloads for Project Atomic are going to be pretty mixed across all of these actually.
The systemd folks (Daniel Mack specifically) also wrote a wrapper:
https://github.com/systemd/systemd-centos-ci
We should likely get all of these listed on the Duffy wiki page and consolidate URLs from there; we have a few different client choices now.
In the near future, in terms of recommendations, we want to move to the cico client that DMS is working on, and away from that hacky Python script.
regards,
--
Karanbir Singh, Project Lead, The CentOS Project
+44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS
GnuPG Key : http://www.karan.org/publickey.asc
On Wed, Apr 13, 2016, at 03:51 AM, Karanbir Singh wrote:
> Is this for the speed of running tests, or are you optimising for speed of redeployment of Duffy nodes?
Both! So currently it looks like we're provisioning 2G of swap, with the rest an XFS / at over 400GB.
What I want is to be able to use most of this as tmpfs in some of the workloads.
We can do this by tweaking the kickstart to use a smaller / that is configured to auto-grow.
One alternative is to mix in some CentOS Atomic Host nodes, which are pre-configured in a very similar way.
> The PaaS SIG is working towards an OpenShift Origin release; once that comes through, we definitely want to have an OpenShift instance in the ci.centos.org infra for folks to consume.
Yes. Related to all of this, I'm guessing that a lot of the workloads (probably especially CentOS 6/5) could be moved underneath Docker inside an Atomic/OpenShift cluster, rather than needing a full baremetal node.
On 19/04/16 18:55, Colin Walters wrote:
> On Wed, Apr 13, 2016, at 03:51 AM, Karanbir Singh wrote:
>> Is this for the speed of running tests, or are you optimising for speed of redeployment of Duffy nodes?
> Both! So currently it looks like we're provisioning 2G of swap, with the rest an XFS / at over 400GB.
> What I want is to be able to use most of this as tmpfs in some of the workloads.
> We can do this by tweaking the kickstart to use a smaller / that is configured to auto-grow.
One feature that I've been working on in Duffy is to allow users to provide their own kickstart. That way you can request a node, submit a new kickstart against it, and then wait out the 5-8 minutes it takes to redeploy; that way Duffy does not need to block, and we don't need to change the hot-standby configs.
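The client-side flow I have in mind is roughly this. To be clear, none of it is implemented yet: only the Node/get call matches the current API, and the Node/kickstart and Node/status endpoints below are entirely hypothetical names used to illustrate the idea:

#!/usr/bin/env python
# Hypothetical flow for the planned custom-kickstart feature.
# Node/kickstart and Node/status are made-up endpoint names used only
# to illustrate the non-blocking request/redeploy/poll idea.
import json
import time
import urllib2

API = "http://admin.ci.centos.org:8080"
KEY = "<your api key>"

# 1. Request a node as today.
data = json.load(urllib2.urlopen(
    "%s/Node/get?key=%s&ver=7&arch=x86_64" % (API, KEY)))
ssid = data["ssid"]

# 2. Submit a replacement kickstart for that session (hypothetical endpoint).
urllib2.urlopen(urllib2.Request(
    "%s/Node/kickstart?key=%s&ssid=%s" % (API, KEY, ssid),
    data=open("custom.ks").read()))

# 3. Poll while the node redeploys (~5-8 minutes) instead of blocking Duffy.
while True:
    status = json.load(urllib2.urlopen(
        "%s/Node/status?key=%s&ssid=%s" % (API, KEY, ssid)))
    if status.get("ready"):
        break
    time.sleep(30)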
> One alternative is to mix in some CentOS Atomic Host nodes, which are pre-configured in a very similar way.
The plan was to have Atomic Host available in the cloud environment; I should have that set up (based on David's work) for testing soon.
--
Karanbir Singh, Project Lead, The CentOS Project
+44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS
GnuPG Key : http://www.karan.org/publickey.asc
On Tue, Apr 19, 2016, at 07:47 PM, Karanbir Singh wrote:
> One feature that I've been working on in Duffy is to allow users to provide their own kickstart. That way you can request a node, submit a new kickstart against it, and then wait out the 5-8 minutes it takes to redeploy; that way Duffy does not need to block, and we don't need to change the hot-standby configs.
Right.
> The plan was to have Atomic Host available in the cloud environment; I should have that set up (based on David's work) for testing soon.
OK, but having it available in Duffy would solve my immediate issue too, since due to the default partitioning it's a lot easier to carve out space post-boot.
Regarding the tmpfs and transient node discussion: http://miroslav.suchy.cz/blog/archives/2015/05/28/increase_mock_performance_...
On Thu, Apr 21, 2016, at 10:18 AM, Colin Walters wrote:
> OK, but having it available in Duffy would solve my immediate issue too, since due to the default partitioning it's a lot easier to carve out space post-boot.
I was thinking about deploying OpenShift in this, but without being able to either control the partitioning layout or use Atomic Host, we'd be in loopback Docker mode, which friends don't let friends use.
On 27/04/16 15:03, Colin Walters wrote:
> On Thu, Apr 21, 2016, at 10:18 AM, Colin Walters wrote:
>> OK, but having it available in Duffy would solve my immediate issue too, since due to the default partitioning it's a lot easier to carve out space post-boot.
> I was thinking about deploying OpenShift in this, but without being able to either control the partitioning layout or use Atomic Host, we'd be in loopback Docker mode, which friends don't let friends use.
We should have a solution to this in the next day or so; the primary one I am driving at is letting folks reinstall their allocated machine (rapidly).
regards,
--
Karanbir Singh, Project Lead, The CentOS Project
+44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS
GnuPG Key : http://www.karan.org/publickey.asc