Hi,
Is it possible to get raw block device storage on nodes in the CI? Right now, from what I can tell, the nodes come up with LVM set up and no space free in the volume group. There are root/home/swap LVs, and we can't really reclaim any space because XFS doesn't support shrinking the filesystem.
Can we have some free space left in the VG? Is there some other way to get this to happen?
Thanks, Dusty
On 22/01/16 05:18, Dusty Mabe wrote:
> Hi,
> Is it possible to get raw block device storage on nodes in the CI? Right now, from what I can tell, the nodes come up with LVM set up and no space free in the volume group. There are root/home/swap LVs, and we can't really reclaim any space because XFS doesn't support shrinking the filesystem.
> Can we have some free space left in the VG? Is there some other way to get this to happen?
> Thanks, Dusty
Initially, the kickstarts we use to deploy CentOS {5,6,7} used the whole disk as one PV, one VG, but with minimal space for the LVs. Then people complained about that, because their tests weren't doing the resize2fs/xfs_growfs operations, so we decided to just add the --grow option:
    part /boot --fstype="ext4" --ondisk=mpatha --size=500
    part pv.14 --fstype="lvmpv" --ondisk=mpatha --size=10000 --grow
    volgroup vg_{{ inventory_hostname_short }} --pesize=4096 pv.14
    logvol /home --fstype="xfs" --size=2412 --name=home --vgname=vg_{{ inventory_hostname_short }} --grow --maxsize=100000
    logvol / --fstype="xfs" --size=8200 --name=root --vgname=vg_{{ inventory_hostname_short }} --grow --maxsize=1000000
    logvol swap --fstype="swap" --size=2136 --name=swap --vgname=vg_{{ inventory_hostname_short }}
Happy to revisit that if needed. One option would be to have Duffy do the resize operation(s) by default before giving out a node, and not touch the layout/fs if called with something like "&resizefs=no". The problem is that in such a case, the "connect to node, analyze, resizefs" operations would add time to the API request/answer, so I'm not sure that's the way to go.
Now for your raw block device storage, unfortunately there is currently no option for that, but it is something we could probably do through iSCSI. What would be the requirements for your tests?
--
Fabian Arrotin
The CentOS Project | http://www.centos.org
gpg key: 56BEC54E | twitter: @arrfab
On Fri, Jan 22, 2016 at 07:47:17AM +0100, Fabian Arrotin wrote:
> On 22/01/16 05:18, Dusty Mabe wrote:
>> Is it possible to get raw block device storage on nodes in the CI? [...]
>> Can we have some free space left in the VG? Is there some other way to get this to happen?
> [...]
> Now for your raw block device storage, unfortunately there is currently no option for that, but it is something we could probably do through iSCSI. What would be the requirements for your tests?
We would like to have raw block devices for Gluster testing too. Gluster can do snapshots based on lvm-thinp, and we recommend using a dedicated volume group for each brick. Our tests can use files over /dev/loop* devices, but that is not how things should be set up in the real world.
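For illustration, this is roughly the per-brick layout our tests would create if a spare disk or free VG space were available (the device name and sizes are hypothetical; just a sketch of the lvm-thinp setup, not our actual test code):

    # dedicated VG per brick, here on a hypothetical spare /dev/sdb
    pvcreate /dev/sdb
    vgcreate vg_brick1 /dev/sdb
    # thin pool, then a thin LV for the brick; snapshots are taken from the pool
    lvcreate --size 8G --thinpool pool1 vg_brick1
    lvcreate --virtualsize 8G --thin vg_brick1/pool1 --name brick1
    # XFS with 512-byte inodes, as recommended for Gluster bricks
    mkfs.xfs -i size=512 /dev/vg_brick1/brick1
    mkdir -p /bricks/brick1 && mount /dev/vg_brick1/brick1 /bricks/brick1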
Not sure about the iscsi suggestion, a network filesystem over network block devices is not very common either ;-)
Thanks, Niels
On 22/01/16 10:53, Niels de Vos wrote:
> We would like to have raw block devices for Gluster testing too. Gluster can do snapshots based on lvm-thinp, and we recommend using a dedicated volume group for each brick. Our tests can use files over /dev/loop* devices, but that is not how things should be set up in the real world.
The way to resolve that would be to request the instances from Duffy, and then reprovision as needed. That way you own the time lag and the process to provision the box (we can help, of course), rather than needing a system-wide change in the base layer.
> Not sure about the iscsi suggestion, a network filesystem over network block devices is not very common either ;-)
Of course, that depends on what you are testing. If performance becomes a concern, then you certainly don't want to go down that route.
regards,
On 22/01/16 12:14, Karanbir Singh wrote:
> On 22/01/16 10:53, Niels de Vos wrote:
>> We would like to have raw block devices for Gluster testing too. [...]
> The way to resolve that would be to request the instances from Duffy, and then reprovision as needed. [...]
That's also a possibility, but I'm not sure that all projects wanting more disk space would like to reinstall each node handed out by Duffy either.
>> Not sure about the iscsi suggestion, a network filesystem over network block devices is not very common either ;-)
> Of course, that depends on what you are testing. If performance becomes a concern, then you certainly don't want to go down that route.
Yes, and it's also a CI/test environment, so while it makes sense to try to stay as close as possible to what would be done in an ideal/real-life scenario, it's not always possible.
As a summary:

- we can try to bring iSCSI targets into the mix, but: they would be gbit-connected, and the storage node behind them does not itself have infinite storage space (there is more space on the local SSD disks of the provisioned nodes)
- we can change the kickstart files to *not* --grow, and let every project create LVs in the VG as they need/want (see the sketch below). As fewer projects would want that, we can implement an extra step in Duffy (as discussed for the VLAN segregation) to expand/resize by default, and leave the layout in the provisioned state when the Duffy API is called with an extra parameter
- projects can also use raw files as block devices on the underlying/existing FS, and set up new VGs/LVs as needed (with a performance impact, but still probably faster than using iSCSI devices)
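To make the second option concrete, the logvol lines would simply lose their --grow/--maxsize; an untested sketch (sizes kept as today; the PV still grows so the VG covers the whole disk, but the free extents stay in the VG):

    part /boot --fstype="ext4" --ondisk=mpatha --size=500
    part pv.14 --fstype="lvmpv" --ondisk=mpatha --size=10000 --grow
    volgroup vg_{{ inventory_hostname_short }} --pesize=4096 pv.14
    logvol /home --fstype="xfs" --size=2412 --name=home --vgname=vg_{{ inventory_hostname_short }}
    logvol / --fstype="xfs" --size=8200 --name=root --vgname=vg_{{ inventory_hostname_short }}
    logvol swap --fstype="swap" --size=2136 --name=swap --vgname=vg_{{ inventory_hostname_short }}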
--
Fabian Arrotin
The CentOS Project | http://www.centos.org
gpg key: 56BEC54E | twitter: @arrfab
On 22/01/16 11:55, Fabian Arrotin wrote:
> On 22/01/16 12:14, Karanbir Singh wrote:
>> The way to resolve that would be to request the instances from Duffy, and then reprovision as needed. [...]
> That's also a possibility, but I'm not sure that all projects wanting more disk space would like to reinstall each node handed out by Duffy either.
From the provider side, we need an interface to make it possible, and we already have all the mechanics for it, so it should be fairly simple to expose to the Jenkins slaves.
> Yes, and it's also a CI/test environment, so while it makes sense to try to stay as close as possible to what would be done in an ideal/real-life scenario, it's not always possible.
no!
The main value prop of ci.centos.org is VERY much to focus on user space and real-world testing, hence the bare metal and no pre-existing setups on the machines.
regards
--
Karanbir Singh, Project Lead, The CentOS Project
+44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS
GnuPG Key : http://www.karan.org/publickey.asc
Hi Dusty,
On Fri, Jan 22, 2016 at 5:18 AM, Dusty Mabe <dusty@dustymabe.com> wrote:
> Is it possible to get raw block device storage on nodes in the CI? [...]
> Can we have some free space left in the VG? Is there some other way to get this to happen?
If performance does not impact your tests, you can set up VGs on PVs created from loopback devices (backed by empty files).
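A minimal sketch of that, with hypothetical paths and sizes:

    # sparse backing file, attached to the first free loop device
    truncate -s 10G /var/tmp/pv0.img
    dev=$(losetup --find --show /var/tmp/pv0.img)
    # the loop device then acts as a normal PV
    pvcreate "$dev"
    vgcreate vg_test "$dev"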
Best regards.
-- Athmane
On 22/01/16 10:06, Athmane Madjoudj wrote:
> On Fri, Jan 22, 2016 at 5:18 AM, Dusty Mabe <dusty@dustymabe.com> wrote:
>> [...]
> If performance does not impact your tests, you can set up VGs on PVs created from loopback devices (backed by empty files).
The other option is to set up an iSCSI-like store and just export a block device to the worker node, then set that up on the client side as needed. Again, it won't be the fastest device in the world, since it will have a 1Gbps network cap (the way things are set up now).
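Roughly, that could look like this (hypothetical names and addresses, with auth/ACL setup left out; a sketch using targetcli/LIO on the store side and iscsiadm on the worker, not a finished config):

    # storage node: export a file-backed LUN over iSCSI (targetcli/LIO)
    targetcli /backstores/fileio create disk01 /var/targets/disk01.img 10G
    targetcli /iscsi create iqn.2016-01.org.centos.ci:disk01
    targetcli /iscsi/iqn.2016-01.org.centos.ci:disk01/tpg1/luns create /backstores/fileio/disk01

    # worker node: discover and log in; the LUN shows up as a local /dev/sdX
    iscsiadm --mode discovery --type sendtargets --portal storage.ci.centos.org
    iscsiadm --mode node --targetname iqn.2016-01.org.centos.ci:disk01 --portal storage.ci.centos.org --login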
regards,
--
Karanbir Singh, Project Lead, The CentOS Project
+44-207-0999389 | http://www.centos.org/ | twitter.com/CentOS
GnuPG Key : http://www.karan.org/publickey.asc
On 01/22/2016 05:06 AM, Athmane Madjoudj wrote:
> On Fri, Jan 22, 2016 at 5:18 AM, Dusty Mabe <dusty@dustymabe.com> wrote:
>> [...]
> If performance does not impact your tests, you can set up VGs on PVs created from loopback devices (backed by empty files).
Yeah, that is what we are doing now. Docker creates its storage on loopback devices by default.
Dusty
On 22/01/16 15:18, Dusty Mabe wrote:
> On 01/22/2016 05:06 AM, Athmane Madjoudj wrote:
>> If performance does not impact your tests, you can set up VGs on PVs created from loopback devices (backed by empty files).
> Yeah, that is what we are doing now. Docker creates its storage on loopback devices by default.
If the larger problem space here is to recreate a cloud-like setup (wherein the image deploys and additional storage is attached as block devices), then we do have an OpenStack cloud coming up in ci.centos.org (ETA ~6 to 8 weeks).
Regards
On 22/01/16 04:18, Dusty Mabe wrote:
> Can we have some free space left in the VG? Is there some other way to get this to happen?
Space was harder and needs more planning, so Patrick and I just hacked up a small example script that lets you build, on demand, a file-backed loop device that can be (ab)used to get you the same result.
grab https://github.com/kbsingh/centos-ci-scripts/blob/master/create_looped_disk...., and call it like:
dsk=$(bash create_looped_disk.sh 10)
will give you a 10GB disk, with $dsk pointing to the /dev/<something> device name you can now consume.
Since the machines are throwaway, we didn't bother with reclaiming space etc., and we left it to the user not to ask for a size greater than the available space.
The backing loop files are created in /var/tmp/.
Performance seems reasonable.
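For the curious, the gist of the script is along these lines (a reconstruction from the description here, not the canonical version; grab the real one from the repo above):

    #!/bin/bash
    # create_looped_disk.sh <size-in-GB>: prints the loop device to consume
    size_gb="${1:?usage: create_looped_disk.sh <size-in-GB>}"
    backing="/var/tmp/loopdisk.$$.img"
    # sparse backing file in /var/tmp, attached to the first free loop device
    truncate -s "${size_gb}G" "$backing"
    losetup --find --show "$backing"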
Does this help you get over the immediate problem?