Hi, so I've had some more time to run through the onboarding process.
I got a simple job created via JJB that runs directly on the slave. At
this point though I'd like to load some custom software (rpmdistro-gitoverlay).
Now my understanding is that we only have a non-root user on the slave
VM, and so if we want to do anything that isn't already installed, we should
call out to Duffy?
My first question is - has anyone tried out writing an Ansible dynamic
inventory script for Duffy? The demo https://github.com/kbsingh/centos-ci-scripts/blob/master/build_python_scrip…
is kind of obviously a poor man's Ansible =)
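To make the question concrete, something along these lines is what I have in mind: a dynamic inventory script that requests nodes from Duffy and emits the JSON Ansible expects from --list. The endpoint, key file location, and the "hosts" field in the reply are just assumptions borrowed from that demo script, so treat this as a sketch rather than a working implementation:

#!/usr/bin/env python
# Rough sketch of a Duffy-backed Ansible dynamic inventory.
# The Duffy URL, key file location and the "hosts" field in the reply
# are assumptions taken from the demo script linked above -- adjust to
# whatever the real API actually returns.
import json
import sys
from urllib.request import urlopen

DUFFY_URL = "http://admin.ci.centos.org:8080/Node/get"   # assumed endpoint
API_KEY_FILE = "duffy.key"                                # assumed key location

def request_nodes(ver="7", arch="x86_64", count=1):
    # Ask Duffy for ephemeral nodes and return the parsed JSON reply.
    key = open(API_KEY_FILE).read().strip()
    url = "%s?key=%s&ver=%s&arch=%s&i_count=%d" % (DUFFY_URL, key, ver, arch, count)
    return json.loads(urlopen(url).read().decode("utf-8"))

def main():
    if "--host" in sys.argv:
        # No per-host variables beyond what --list already provides.
        print(json.dumps({}))
        return
    reply = request_nodes()
    inventory = {
        "duffy": {"hosts": reply.get("hosts", [])},
        "_meta": {"hostvars": {}},
    }
    print(json.dumps(inventory, indent=2))

if __name__ == "__main__":
    main()

The idea being that you could then point ansible-playbook at the script with -i and run plays against whatever nodes Duffy handed out for that session.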
I have 3 other questions about Duffy. First, it might be interesting
to investigate a configuration like this:
http://www.projectatomic.io/blog/2015/05/building-and-running-live-atomic/
which basically runs the OS directly out of RAM. This model is particularly
well suited to workloads like Duffy's, where you don't *actually* want
the OS to be persistent on disk. Using the disks as swap
space instead of as xfs/ext4 filesystems can be a dramatic speed improvement
(a large part of the win comes from skipping fsync and journaling).
Beyond that, has any thought been given to also supporting e.g. OpenStack as a provisioning API?
Or for that matter allocating a Kubernetes namespace or OpenShift project?
My workloads for Project Atomic are going to be pretty mixed across
all of these actually.
We've seen a bunch of yum errors with our tests today, basically errors of this type:
[Errno -1] repomd.xml does not match metalink for epel
Full log at [1]. I'm thinking maybe we should modify our tests to run yum in a loop to account for intermittent errors like this?
[1] - https://ci.centos.org/job/atomicapp-test-docker-pr/67/console
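By "run yum in a loop" I mean something like the following (the retry count and sleep interval are arbitrary placeholders, and a few lines of shell in the job itself would do the same thing):

#!/usr/bin/env python
# Sketch of a retry wrapper around yum, to paper over intermittent
# mirror/metalink errors. Retry count and sleep interval are arbitrary.
import subprocess
import sys
import time

def yum_with_retries(args, attempts=5, delay=30):
    # Run 'yum <args>' until it succeeds or we run out of attempts.
    for attempt in range(1, attempts + 1):
        rc = subprocess.call(["yum"] + args)
        if rc == 0:
            return 0
        if attempt < attempts:
            print("yum failed (attempt %d/%d), retrying in %ds" % (attempt, attempts, delay))
            time.sleep(delay)
    return rc

if __name__ == "__main__":
    sys.exit(yum_with_retries(sys.argv[1:]))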
Hi Folks,
This morning we had an unplanned outage of the NGINX proxy inbound to
ci.centos.org.
We were able to reboot the machine, and add some resources to help us
better handle the inbound traffic. Thanks to Fabian for doing that!
We've seen this behavior once or twice before, where inbound HTTP
connections fail intermittently. During those periods and during our
emergency maintenance, internal connections between the slaves and the
Jenkins master remained up and no jobs were interrupted.
If you have any questions, please let us know here on-list or in
#centos-devel on freenode.
Cheers!
--
Brian Stinson
CentOS CI Infrastructure Team
So, while working on the LV for the exported dir for rdo-store, I
noticed (and Zabbix complained a lot about this over the last few weeks) that
the artifacts LV is getting bigger and bigger.
In particular, the rdo project is using more and more disk space:
760G rdo
It would be good to do some clean-up there, and only keep what needs to
be publicly available (behind http://artifacts.ci.centos.org/rdo/ |
https://ci.centos.org/artifacts/rdo/ )
I'm happy to implement that on the server itself (through a cron job) as
long as we agree on the criteria (for example, delete everything older
than 30 days).
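For example, a cron-driven script along these lines (the path is only a guess at where the rdo tree lives on the artifacts node, and 30 days is just the proposed criterion, nothing decided):

#!/usr/bin/env python
# Sketch of the proposed cleanup: remove anything under the rdo artifacts
# tree that hasn't been touched in 30 days. The path is an assumption and
# the retention period is only the example criterion from above.
import os
import time

ARTIFACTS_DIR = "/srv/artifacts/rdo"      # assumed location of the rdo tree
MAX_AGE = 30 * 24 * 3600                  # 30 days

def cleanup(root, max_age):
    cutoff = time.time() - max_age
    for dirpath, _dirnames, filenames in os.walk(root):
        for name in filenames:
            path = os.path.join(dirpath, name)
            if os.path.getmtime(path) < cutoff:
                os.remove(path)    # directories are left in place

if __name__ == "__main__":
    cleanup(ARTIFACTS_DIR, MAX_AGE)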
OTOH, keep in mind that I see multiple .qcow2 images (for the undercloud?),
so those would also be deleted.
Or would you prefer to delete the unneeded files yourselves on your side?
--
Fabian Arrotin
The CentOS Project | http://www.centos.org
gpg key: 56BEC54E | twitter: @arrfab
Just to let the RDO people know that a specific NFS export has been
made available for them on the artifacts node:
artifacts.ci.centos.org:/srv/rdo-store (it will be accessible from
n30.pufty.ci.centos.org).
The underlying LV is 250GB (as discussed) but can be grown online later
if needed.
--
Fabian Arrotin
The CentOS Project | http://www.centos.org
gpg key: 56BEC54E | twitter: @arrfab
Hi Folks,
We're working on an issue with our Gerrit trigger plugins, and we need
to perform a saferestart of Jenkins to restore the connection to GerritHub.
You may notice the banner on ci.centos.org coming up soon (We're
expecting this to happen around 11:25 AM Eastern time, 15:25 UTC).
We attempted this last night, but there were a few long-running jobs
that we didn't want to interrupt.
During a saferestart, Jenkins queues up any incoming jobs and resubmits
them after the restart is finished.
If you have any questions, please let us know here or in #centos-devel
Cheers!
--
Brian Stinson
CentOS CI Infrastructure Team
I heard GitHub plugins now exist. Has anyone prepared some docs or a
quick terminal session showing the correct way to get this running?
Once this works, my git tpush [1] script should also automatically work!
Thanks,
James
[1] https://ttboj.wordpress.com/2016/02/16/introducing-git-tpush/
hi,
We've been looking at bringing up a cloud infra inside of ci.centos.org
so that projects are able to test in and against cloud workloads,
without having to first deploy their own!
As some people on this list might know, the RDO effort is looking at
bringing up an RDO Cloud in the coming months; this will have the
capacity to run a large number of instances, and also offer a very
comprehensive OpenStack feature set. Our intention is to collaborate
with them, and essentially use their cloud instance as the cloud
workload host for ci.centos.org - in return we would offer them
baremetal instances on our side, for when and how they want to test
openstack-ironic driven workloads.
This is made possible largely because the RDO Cloud is likely going to
be hosted in the same data center as the main ci.centos.org infra,
and we would have a 10Gb (or better) interconnect with their setup.
However, this effort is still in PoC stages.
In the meantime, we have a portion of our ci.centos.org infra that we
are going to onramp as an RDO Cloud in the coming days. I hope to have
this setup for testing by the 28th of March. The aim here is going to
be to offer as few features as needed to deliver a minimum viable
IaaS service level, i.e. Nova + Keystone + Glance + Horizon (and any
hard dependencies from there), backed by a Ceph storage cluster. We
will aim to set up the network in a way that it maps 1:1 with the
baremetal machines' network (i.e. there will be no tenant isolation etc.).
Of course, with the RDO Cloud coming up down the road, the lack of
extended features won't be an issue.
I am hoping to deploy enough capacity to run ~ 500 VMs.
Regards
--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc