On 25/07/17 17:45, Brian Stinson wrote:
On Jul 14 16:25, Fabian Arrotin wrote:
<snip>
A couple of us spoke about this the other day and decided that we would take the following approach to sizing VMs on altarch hardware:
Our Openstack instance, CICO Cloud, has the following VM sizes available:
Name    | RAM (MB) | Disk (GB) | Ephemeral (GB) | VCPUs |
--------+----------+-----------+----------------+-------+
tiny    | 1940     | 10        | 0              | 1     |
small   | 3875     | 20        | 0              | 2     |
medium  | 7750     | 40        | 0              | 4     |
--------+----------+-----------+----------------+-------+
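For reference, flavors with these specs would typically be registered in OpenStack roughly as follows. This is only a minimal openstacksdk sketch; the cloud name "cico" and the exact attribute names are assumptions on my part, not the actual CICO tooling:

    import openstack

    # "cico" is a hypothetical clouds.yaml entry, not the real CICO config.
    conn = openstack.connect(cloud="cico")

    flavors = [
        {"name": "tiny",   "ram": 1940, "vcpus": 1, "disk": 10},
        {"name": "small",  "ram": 3875, "vcpus": 2, "disk": 20},
        {"name": "medium", "ram": 7750, "vcpus": 4, "disk": 40},
    ]

    for spec in flavors:
        # RAM is in MB and disk in GB, matching the table above;
        # ephemeral disk is 0 for every flavor listed.
        conn.compute.create_flavor(ephemeral=0, **spec)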
We will duplicate the same sizes for Libvirt VMs on altarch hardware, but in order to take advantage of the incredible memory density on these machines, we'll be adding a few flavors for libvirt nodes -only-:
Name          | RAM (MB) | Disk (GB) | Ephemeral (GB) | VCPUs |
--------------+----------+-----------+----------------+-------+
lram.tiny     | 11444    | 10        | 0              | 4     |
lram.small    | 15258    | 20        | 0              | 8     |
xram.tiny     | 22888    | 10        | 0              | 4     |
xram.small    | 38750    | 20        | 0              | 8     |
xram.medium   | 77500    | 40        | 0              | 16    |
--------------+----------+-----------+----------------+-------+
The aarch64 kit will allow: tiny, small, medium, lram.tiny, lram.small
The ppc64le kit will allow: all that you see above
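To make the per-arch split concrete, here is a small sketch of how the allowlists above could be encoded on the provisioning side. The names and structure are hypothetical, not the actual provisioning code:

    # Hypothetical encoding of the per-arch flavor allowlists above.
    COMMON = ["tiny", "small", "medium"]
    LRAM = ["lram.tiny", "lram.small"]
    XRAM = ["xram.tiny", "xram.small", "xram.medium"]

    ALLOWED_FLAVORS = {
        "aarch64": COMMON + LRAM,          # no xram.* flavors on aarch64
        "ppc64le": COMMON + LRAM + XRAM,   # ppc64le allows everything above
    }

    def flavor_allowed(arch, flavor):
        """Return True if the given flavor may be requested on that arch's kit."""
        return flavor in ALLOWED_FLAVORS.get(arch, [])

    assert flavor_allowed("aarch64", "lram.small")
    assert not flavor_allowed("aarch64", "xram.medium")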
What I'd like from you all is comments about the {l,x}ram sizing. We have enough capacity to host quite a few of these VMs. Since this is easy to change and we haven't opened this up to users yet, I'll continue working on the provisioning side with this scheme in mind.
Cheers!
-- Brian
Well, I don't see why we should go "insane" with the xram.* flavors. At the moment CI only serves bare-metal nodes (the CI cloud has been mentioned multiple times, but CI users aren't able -yet- to consume those instances, though that's another story), and for bare-metal, depending on which nodes/chassis they get back, it's either 16GB or 32GB of RAM. So my point is that we shouldn't go higher than that, at least at the beginning.
I don't know when (for example) RDO will be able to test a deployment in CI, but they'll probably have needs beyond vcpus/memory, as they'll also need storage (and bigger than 40GB?).