On 13/01/16 12:52, Karanbir Singh wrote:
On 19/12/15 18:02, David Moreau Simard wrote:
The bulk of the jobs are offloaded to an ephemeral bare metal node, but Ansible still runs from the slave. Some jobs do run locally, such as smaller tox jobs.
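For anyone unfamiliar with the flow, a job does roughly the following. This is a minimal sketch rather than our actual job code: the Duffy endpoint and parameters are the v1 API as I recall it, and duffy.key and site.yml are placeholder names.

    import json
    import subprocess
    import urllib.request

    # Ask Duffy for an ephemeral bare metal node; the key file path,
    # endpoint, and parameters here are assumptions based on the v1 API.
    key = open("duffy.key").read().strip()
    url = ("http://admin.ci.centos.org/Node/get"
           "?key=%s&ver=7&arch=x86_64" % key)
    session = json.load(urllib.request.urlopen(url))
    host = session["hosts"][0]

    # Ansible itself stays on the Jenkins slave and targets the ephemeral
    # node over SSH via a one-host ad hoc inventory (note trailing comma).
    subprocess.check_call(
        ["ansible-playbook", "-i", host + ",", "site.yml"])

    # Return the node to the pool once the job is done.
    urllib.request.urlopen(
        "http://admin.ci.centos.org/Node/done?key=%s&ssid=%s"
        % (key, session["ssid"]))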
What's the strategy if we want to scale this?
At the moment the slaves all run from VMs hosted near the admin and Jenkins instances, and are set up and managed manually; this was very much a stopgap arrangement till we can get a better virtualised setup in place. We've been looking at and trying to scope out getting an RDO cloud in place, which could then be used for three things:
- making an OpenStack API available for people who want to just consume VMs for their workloads
[snip]
- offering up image-backed resources for people looking at doing testing with other OSs, e.g. what the libvirt and libguestfs folks do at the moment.
We have a hardware slab (~24 physical machines' worth) dedicated to this task (so as not to cut into the CI bare metal pools), but are waiting on the RH facility folks to get it wired up and dial-toned.
Given the nature and impact of this setup, I am going to try and see if we can speed up delivery of that infra from the present timeline of end Feb '16.
I'd just like to chime in and say that the ability to provision VMs on OpenStack (instead of C6/7 physical hosts) would be a big help for me in migrating the Foreman tests to CentOS CI. We currently run VMs on Rackspace for CentOS, Debian, Fedora and Ubuntu, so we need a broad selection of images, which Duffy + bare metal hosts don't provide.
I've been trying to spin up VMs on the physical hosts for each OS, but it's fiddly to set up compared to spinning up a pre-built VM image on OpenStack. (Little networking setup issues, virt-install workarounds, bugs in our tests that assume certain package lists, and the complexity of another provisioning layer.)
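To make the contrast concrete, the OpenStack side collapses to something like the sketch below. This uses the current openstacksdk for brevity (in practice today we'd more likely reach for shade or novaclient), and the cloud name, image names, and flavor are all made up:

    import openstack

    # Connect via a named cloud entry in clouds.yaml; "rdo-ci" is a
    # placeholder, not a real cloud name.
    conn = openstack.connect(cloud="rdo-ci")

    # One pre-built image per OS under test; image and flavor names are
    # illustrative only.
    for image in ("centos-7", "debian-8", "fedora-23", "ubuntu-14.04"):
        server = conn.create_server(
            name="foreman-test-" + image,
            image=image,
            flavor="m1.small",
            wait=True,      # block until the instance is ACTIVE
            auto_ip=True,   # attach a floating IP if one is needed
        )
        print(server.name, server.public_v4)

No virt-install workarounds and no per-host networking fixes; the pre-built image does the heavy lifting.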
Looking forward to this, thanks for the update.