[Ci-users] going beyond getting started
David Moreau Simard
dms at redhat.com
Tue Apr 12 20:15:54 UTC 2016
I've made a Python client, CLI, and library to interface with Duffy:
- Docs: http://python-cicoclient.readthedocs.org/en/latest/
- Repo: https://github.com/dmsimard/python-cicoclient
You can install it from source, PyPI, or RPM through my Copr repository.
If you intend to use Ansible in your jobs, there is an Ansible role
you might be interested in:
Otherwise, straight commands like "cico node get" or "cico node done
<ssid>" can work too.
Just make sure to save the SSID somewhere persistent so you don't leak
nodes when jobs fail.
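One way to make that persistence automatic is a small context manager that writes the SSID to disk before the job runs and always releases the node afterwards. This is a hypothetical sketch, not part of python-cicoclient; the file name and the command lists are assumptions, and the exact output of "cico node get" will need parsing to extract the SSID (see the docs above):

```python
import os
import subprocess
from contextlib import contextmanager

# Hypothetical path; anything that survives a failed job works.
SSID_FILE = ".duffy-ssid"

@contextmanager
def duffy_session(request_cmd, release_cmd):
    """Persist the SSID printed by request_cmd and always run release_cmd.

    request_cmd must print the SSID on stdout; release_cmd is a list of
    argv strings in which "{ssid}" is substituted before running.
    """
    ssid = subprocess.check_output(request_cmd, text=True).strip()
    # Save the SSID to disk first, so a crashed job can still be cleaned up.
    with open(SSID_FILE, "w") as f:
        f.write(ssid)
    try:
        yield ssid
    finally:
        subprocess.check_call([arg.format(ssid=ssid) for arg in release_cmd])
        os.remove(SSID_FILE)
```

With cico this might look like duffy_session([...request command...], ["cico", "node", "done", "{ssid}"]), wrapped around the body of the job.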
If you want a custom workflow, you can also abstract Duffy away and
import python-cicoclient as a library.
For example, the CLI interface just calls out to the wrapper:
Welcome and good luck, let me know if you have any questions!
David Moreau Simard
Senior Software Engineer | Openstack RDO
dmsimard = [irc, github, twitter]
On Tue, Apr 12, 2016 at 4:07 PM, Colin Walters <walters at verbum.org> wrote:
> Hi, so I got some more time to run through the process of onboarding.
> I got a simple job created via JJB that runs directly on the slave. At
> this point though I'd like to load some custom software (rpmdistro-gitoverlay).
> Now my understanding is that we only have a non-root user on the slave
> VM, and so if we want to do anything that isn't already installed, we should
> call out to Duffy?
> My first question is - has anyone tried out writing an Ansible dynamic
> inventory script for Duffy? The demo https://github.com/kbsingh/centos-ci-scripts/blob/master/build_python_script.py
> is kind of obviously a poor man's Ansible =)
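For reference, the protocol Ansible expects from a dynamic inventory script is simple: print JSON when invoked with --list. A rough sketch, assuming the job has already written the allocated hostnames to a hypothetical ".duffy-hosts" file (the file name and the "ansible_user" value are assumptions, not part of Duffy):

```python
#!/usr/bin/env python
"""Sketch of an Ansible dynamic inventory for Duffy-allocated nodes."""
import json
import sys

HOSTS_FILE = ".duffy-hosts"  # hypothetical: one hostname per line

def build_inventory(hosts):
    """Build the JSON structure Ansible expects from --list."""
    return {
        "duffy": {"hosts": hosts},
        # Duffy nodes are typically reached as root; adjust if needed.
        "_meta": {"hostvars": {h: {"ansible_user": "root"} for h in hosts}},
    }

def main(argv):
    if "--list" in argv:
        try:
            with open(HOSTS_FILE) as f:
                hosts = [line.strip() for line in f if line.strip()]
        except IOError:
            hosts = []
        json.dump(build_inventory(hosts), sys.stdout)
    else:
        # --host <name>: per-host vars are already served via _meta above.
        json.dump({}, sys.stdout)

if __name__ == "__main__":
    main(sys.argv[1:])
```

Pointed at with "ansible-playbook -i inventory.py site.yml", this would expose the Duffy nodes under a "duffy" group.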
> I have 3 other questions about Duffy. First, it might be interesting
> to investigate a configuration like this:
> which basically runs out of RAM directly. This model is particularly
> well suited to workloads like Duffy where you don't *actually* want
> the OS to be persistent on disk. Using the disks as swap
> space instead of xfs/ext4 can be a dramatic speed improvement. (A large
> part comes from skipping fsync and journaling.)
> Beyond that, has any thought been given to also supporting e.g. OpenStack as a provisioning API?
> Or for that matter allocating a Kubernetes namespace or OpenShift project?
> My workloads for Project Atomic are going to be pretty mixed across
> all of these actually.
> Ci-users mailing list
> Ci-users at centos.org