Hi,
Just starting a thread on the list so that the various projects that need to test their builds on AltArch (aarch64/ppc64/ppc64le) within CI are aware of the following:
Thanks to Brian, who coordinated with several folks, we'll soon be able to provide access to aarch64/ppc64/ppc64le nodes within ci.centos.org.
So far we have configured/deployed/tested ppc64/ppc64le in the environment, so that we can automate the whole thing happening behind the scenes (basically Ansible being called for this, the same way it's currently done for x86_64).
The last thing to do now is to make Duffy (https://wiki.centos.org/QaWiki/CI/Duffy) aware of those "other arches" nodes, so that it can:
- hand over those nodes on specific request (see the sketch below)
- reinstall them automatically by calling the correct playbooks
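To give a rough idea of what requesting one of those nodes could look like once Duffy knows about them, here's a minimal Python sketch. The "arch" parameter and the response fields are assumptions modelled on the current x86_64 workflow, so check the Duffy wiki page above for the authoritative API:

import requests

DUFFY_URL = "http://admin.ci.centos.org/Node/get"  # assumed endpoint, per the x86_64 workflow
API_KEY = "your-project-api-key"                   # placeholder

# Hypothetical request for one CentOS 7 ppc64le node; "arch" is the new
# parameter Duffy would need to learn for AltArch requests.
resp = requests.get(DUFFY_URL, params={"key": API_KEY, "ver": "7", "arch": "ppc64le", "count": 1})
resp.raise_for_status()
session = resp.json()
print(session["ssid"], session["hosts"])  # session id + allocated node(s), assuming the usual Duffy reply shape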
Worth noting that due to the limited number of physical nodes for those AltArches, we won't be able to provide you access to bare-metal nodes, but rather to VMs running on top of them (at least for ppc64/ppc64le).
That shouldn't be a problem, as we'll be able to distribute resources so that you'll have multiple vCPUs per VM. For POWER8, our hypervisors are installed with CentOS 7 ppc64le, which uses the kvm_hv module. That means it's possible to use nested virt (if needed), as the nested guest will then use kvm_pr instead: slower, but it works (http://wiki.qemu.org/Documentation/Platforms/POWER).
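For reference, here's a quick Python sketch (run on the machine itself) that checks which of those KVM modules is loaded, by parsing /proc/modules:

# Minimal sketch: report whether kvm_hv (bare-metal POWER hypervisor)
# or kvm_pr (nested/problem-state mode) is currently loaded.
def loaded_kvm_modules(path="/proc/modules"):
    with open(path) as f:
        modules = {line.split()[0] for line in f}
    return [m for m in ("kvm_hv", "kvm_pr") if m in modules]

found = loaded_kvm_modules()
if "kvm_hv" in found:
    print("kvm_hv loaded: full-speed KVM on the POWER host")
elif "kvm_pr" in found:
    print("kvm_pr loaded: nested virt mode, slower but it works")
else:
    print("no POWER KVM module loaded")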
If you have remarks/comments, feel free to share them!
On Jul 14 16:25, Fabian Arrotin wrote:
<snip>
A couple of us spoke about this the other day and decided that we would take the following approach to sizing VMs on altarch hardware:
Our OpenStack instance, CICO Cloud, has the following VM sizes available (RAM in MB; Disk and Ephemeral in GB):
Name    | RAM  | Disk | Ephemeral | VCPUs |
--------+------+------+-----------+-------+
tiny    | 1940 | 10   | 0         | 1     |
small   | 3875 | 20   | 0         | 2     |
medium  | 7750 | 40   | 0         | 4     |
--------+------+------+-----------+-------+
We will duplicate the same sizes for Libvirt VMs on altarch hardware, but in order to take advantage of the incredible memory density on these machines, we'll be adding a few flavors for libvirt nodes -only-:
Name          | RAM   | Disk | Ephemeral | VCPUs |
--------------+-------+------+-----------+-------+
lram.tiny     | 11444 | 10   | 0         | 4     |
lram.small    | 15258 | 20   | 0         | 8     |
xram.tiny     | 22888 | 10   | 0         | 4     |
xram.small    | 38750 | 20   | 0         | 8     |
xram.medium   | 77500 | 40   | 0         | 16    |
--------------+-------+------+-----------+-------+
The aarch64 kit will allow: tiny, small, medium, lram.tiny, lram.small
The ppc64le kit will allow: all that you see above
What I'd like from you all is comments about the {l,x}ram sizing. We have enough capacity to host quite a few of these VMs. Since this is easy to change and we haven't opened this up to users yet, I'll continue working on the provisioning side with this scheme in mind.
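To make the sizing concrete, here's a small illustrative Python sketch (not part of any real tooling) that encodes the flavors above and picks the smallest one satisfying a job's RAM/vCPU needs:

# Proposed flavors, straight from the tables above: name -> (RAM in MB, disk in GB, vCPUs).
FLAVORS = {
    "tiny":        (1940,  10, 1),
    "small":       (3875,  20, 2),
    "medium":      (7750,  40, 4),
    "lram.tiny":   (11444, 10, 4),
    "lram.small":  (15258, 20, 8),
    "xram.tiny":   (22888, 10, 4),
    "xram.small":  (38750, 20, 8),
    "xram.medium": (77500, 40, 16),
}

AARCH64_KIT = ["tiny", "small", "medium", "lram.tiny", "lram.small"]
PPC64LE_KIT = list(FLAVORS)  # everything above

def smallest_fit(allowed, ram_mb, vcpus):
    """Return the smallest allowed flavor meeting the RAM/vCPU requirements, or None."""
    candidates = [n for n in allowed if FLAVORS[n][0] >= ram_mb and FLAVORS[n][2] >= vcpus]
    return min(candidates, key=lambda n: FLAVORS[n][0], default=None)

print(smallest_fit(AARCH64_KIT, ram_mb=8000, vcpus=4))    # -> lram.tiny
print(smallest_fit(PPC64LE_KIT, ram_mb=30000, vcpus=8))   # -> xram.small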
Cheers!
-- Brian
On 25/07/17 17:45, Brian Stinson wrote:
<snip>
Well, I don't see why we should go "insane" with the xram.* flavors. At the moment, CI only serves bare-metal nodes (while it has been mentioned multiple times that there is a CI cloud, CI users aren't able -yet- to consume those instances, but that's another story), and for bare metal, depending on which nodes/chassis they get, it's either 16GB or 32GB. So my point is that we shouldn't go higher than that, at least for the beginning.
I don't know when (for example) RDO will be able to test a deployment in CI, but they'll probably have needs beyond vCPUs/memory: they'll need storage as well (and bigger than 40GB?).
On Jul 26 15:04, Fabian Arrotin wrote:
<snip>
> Well, I don't see why we should go "insane" with the xram.* flavors. At the moment, CI only serves bare-metal nodes, and for bare metal, depending on which nodes/chassis they get, it's either 16GB or 32GB. So my point is that we shouldn't go higher than that, at least for the beginning.
We can remove the medium flavor for now, but there's nothing constraining us to 32GB hardware either at the moment (besides what's currently deployed).
> I don't know when (for example) RDO will be able to test a deployment in CI, but they'll probably have needs beyond vCPUs/memory: they'll need storage as well (and bigger than 40GB?).
Disks are another story: we could almost double the disk on the lram and xram flavors and still be OK capacity-wise, I think, but we'll need to gather usage patterns down the line.
On 26/07/17 15:38, Brian Stinson wrote:
<snip>
> Disks are another story: we could almost double the disk on the lram and xram flavors and still be OK capacity-wise, I think, but we'll need to gather usage patterns down the line.
Sounds good; if we have the capacity, we might as well use it.