On 24/11/15 19:14, John Trowbridge wrote:
> Howdy folks.
>
> How feasible do we think $subject is?

It can be done, but it's not going to be simple. The challenge is mostly around what the other projects are doing.

> For RDO-Manager jobs it is pretty much near zero chance of success on a
> machine with 16G of RAM. This is especially true for our HA jobs. I made
> a bit of a hack to our centosci provisioner in khaleesi[1] to throw out
> all hufty chassis nodes since they only have 16G RAM. This is slightly
> wasteful though, and would be better to be something that could be
> selected via the API.

What we can do for now is take hufty.ci out of the mix completely and replace it with another pool that has the 32 GB RAM spec, keeping hufty.ci for the failover, testing, dev, and reserved-nodes setups. So from the API side you will no longer see nodes allocated from hufty.ci.

> Thanks,
> John Trowbridge
>
> [1]
> https://github.com/redhat-openstack/khaleesi/commit/14d19c317c0a3fb6b551f4b1722d528d9ad3156a

Please don't do this! Note that there is only a small set (typically 10 to 12) of machines in the warm cache at any given time for c7/x86_64, and they can all be from the same pool (hufty, even!).

If we need to implement something of this nature, I would prefer to do it on the API service side of things, perhaps mapping it from the apikey:<desired config>; that way the service would be able to ensure (or try to!) that there are always some machines in a cached state, in a pool that is usable for that apikey.

For now, we are able to take hufty.ci out of the loop, so no new allocations are made from that pool.

regards

--
Karanbir Singh
+44-207-0999389 | http://www.karan.org/ | twitter.com/kbsingh
GnuPG Key : http://www.karan.org/publickey.asc
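
The apikey:<desired config> mapping suggested above could be sketched roughly as follows. This is only an illustration of the idea, not the actual CI/Duffy API: the pool name "bigmem.ci", the 32 GB figure for the replacement pool, and all class and function names here are assumptions.

```python
from dataclasses import dataclass

@dataclass
class NodePool:
    name: str
    ram_gb: int

# Hypothetical pool inventory; hufty.ci chassis have 16G RAM per the
# thread, "bigmem.ci" is a made-up name for a 32 GB pool.
POOLS = [NodePool("hufty.ci", 16), NodePool("bigmem.ci", 32)]

# Per-apikey desired config, registered on the API service side so the
# warm cache can be kept stocked with nodes usable for each key.
APIKEY_CONFIG = {
    "rdo-manager-key": {"min_ram_gb": 32},  # HA jobs need 32 GB+
    "default": {"min_ram_gb": 16},
}

def select_pools(apikey: str) -> list[NodePool]:
    """Return the pools whose nodes satisfy the config for this apikey."""
    cfg = APIKEY_CONFIG.get(apikey, APIKEY_CONFIG["default"])
    return [p for p in POOLS if p.ram_gb >= cfg["min_ram_gb"]]
```

With something like this, an RDO-Manager key would only ever be handed 32 GB nodes, while other keys could still draw from hufty.ci, instead of clients filtering chassis out after allocation as the khaleesi hack does.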