hi,
What are the main concerns with hosting some builder hardware geo-distributed from the main Koji hub? Latency aside, given that most builders spend all their time in mock root builds and then rpmbuild, if we can get gbit links to the remote machines we should be OK, right?
regards,
On 10/08/15 10:22, Karanbir Singh wrote:
What are the main concerns with hosting some builder hardware geo-distributed from the main Koji hub? Latency aside, given that most builders spend all their time in mock root builds and then rpmbuild, if we can get gbit links to the remote machines we should be OK, right?
Well, a robust VPN solution would need to be in place, as all Koji builders need access to the (NFS-shared) central repo. And rather than just having a gbit link at the remote machines' end, we would first need a gbit link at the origin, which will not be the case.
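To make "access" concrete: before kojid is even worth starting, a remote builder would need to pass something like the check below over that VPN (just a sketch; the hub hostname is a placeholder, not our real one).

#!/usr/bin/env python
# Pre-flight check for a remote builder behind the VPN: can it reach the hub
# at all, and is the NFS-shared central repo actually mounted?  The hub
# hostname below is only a placeholder for illustration.
import os
import socket
import sys

HUB_HOST = "koji-hub.example.centos.org"   # placeholder, not the real hub
HUB_PORT = 443
KOJI_MOUNT = "/mnt/koji"                   # the NFS-shared central repo

ok = True
try:
    socket.create_connection((HUB_HOST, HUB_PORT), timeout=10).close()
except (socket.error, socket.timeout) as exc:
    print("cannot reach %s:%d over the VPN: %s" % (HUB_HOST, HUB_PORT, exc))
    ok = False

if not os.path.ismount(KOJI_MOUNT):
    print("%s is not mounted; check the NFS export and the VPN" % KOJI_MOUNT)
    ok = False

sys.exit(0 if ok else 1)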
--
Fabian Arrotin The CentOS Project | http://www.centos.org gpg key: 56BEC54E | twitter: @arrfab
On 10/08/15 09:43, Fabian Arrotin wrote:
Well, a robust VPN solution would need to be in place, as all Koji builders need access to the (NFS-shared) central repo. And rather than just having a gbit link at the remote machines' end, we would first need a gbit link at the origin, which will not be the case.
Are you saying that the blades in the same blade center don't have a gbit link to each other? Isn't that where/how the present Koji origin is set up?
On 10/08/15 10:54, Karanbir Singh wrote:
Are you saying that the blades in the same blade center don't have a gbit link to each other? Isn't that where/how the present Koji origin is set up?
No, but you were talking about geo-distributed, so surely not in the same DC; hence the remark about needing a gbit internet connection. The link we have is actually shared with multiple projects, so there is no QoS on it, and I don't know how the NFS-through-VPN I/O operations will handle that. I guess remote builders don't really need gbit either .. people experienced with Koji should now answer :-)
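One crude way to put a number on that before committing anything would be to time a sequential write and read against the share from each candidate remote site, along these lines (only a sketch; the test path is an example, and the read figure will be flattered by client-side caching).

#!/usr/bin/env python
# Crude throughput probe for the NFS-over-VPN link: time a sequential write
# and read of a throwaway file on the shared mount.  Path and size are just
# examples.
import os
import time

TEST_FILE = "/mnt/koji/work/net-probe.bin"   # throwaway file, example path
SIZE_MB = 256
CHUNK = b"x" * (1024 * 1024)

start = time.time()
with open(TEST_FILE, "wb") as f:
    for _ in range(SIZE_MB):
        f.write(CHUNK)
    f.flush()
    os.fsync(f.fileno())
write_secs = time.time() - start

start = time.time()
with open(TEST_FILE, "rb") as f:
    while f.read(1024 * 1024):
        pass
read_secs = time.time() - start

os.unlink(TEST_FILE)
print("write: %.1f MB/s  read: %.1f MB/s" % (SIZE_MB / write_secs, SIZE_MB / read_secs))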
On Aug 10 10:59, Fabian Arrotin wrote:
No, but you were talking about geo-distributed, so surely not in the same DC; hence the remark about needing a gbit internet connection. The link we have is actually shared with multiple projects, so there is no QoS on it, and I don't know how the NFS-through-VPN I/O operations will handle that. I guess remote builders don't really need gbit either .. people experienced with Koji should now answer :-)
Network/physical security (definitely a VPN) should be high on our list of things to think about since the builders have access to /mnt/koji. Fast networks are nice to have too :)
--Brian
On 10/08/15 14:32, Brian Stinson wrote:
Network/physical security (definitely a VPN) should be high on our list of things to think about since the builders have access to /mnt/koji. Fast networks are nice to have too :)
OK, so we need to NFS-share /mnt/koji amongst all the builders, regardless of arch or target; apart from this, are there any other challenges?
How did Fedora run the shadow builders back in the days of secondary arches? Is that still a thing?
On 10/08/2015 21:18, Karanbir Singh wrote:
OK, so we need to NFS-share /mnt/koji amongst all the builders, regardless of arch or target; apart from this, are there any other challenges? How did Fedora run the shadow builders back in the days of secondary arches? Is that still a thing?
PPC64(le), s390(x) and aarch64 are all Fedora secondary architectures. Each one has its own Koji environment, separate from the primary env in the Fedora infrastructure. Koji-shadow works by pulling build information down via the Koji hub web server, not using a shared NFS mount. As each shadow Koji manages its own build yum repos, access to the primary Koji's NFS mount isn't needed. My recollection is that the original Fedora ARM Koji setup (when armv7hl was a secondary arch) was hosted at Seneca in Toronto.
So, if you want to use the Fedora model, all primary arch builders need access to a common NFS mount. Any secondary arches don't.
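For anyone who hasn't looked at koji-shadow: the "pull it down via the hub web server" part is just the hub's XML-RPC API, so a shadow instance only needs read-only calls along these lines (a sketch; the hub URL and tag name are just examples).

#!/usr/bin/env python
# Sketch of the koji-shadow style of working: pull build metadata from the
# primary hub over XML-RPC/HTTP only -- no access to its /mnt/koji needed.
# Hub URL and tag name are examples.
import koji

hub = koji.ClientSession("https://koji.fedoraproject.org/kojihub")

# latest builds in a tag, then the per-build metadata a shadow instance
# would use to rebuild and track them locally
for tagged in hub.listTagged("f22-updates", latest=True)[:5]:
    info = hub.getBuild(tagged["build_id"])
    print(info["nvr"], info["state"])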
On Mon, Aug 10, 2015 at 4:49 PM, Howard Johnson merlin@mwob.org.uk wrote:
So, if you want to use the Fedora model, all primary arch builders need access to a common NFS mount. Any secondary arches don't.
Please tell me it's at least an NFSv4 share and mount, with Kerberized authentication? I've had some difficulty explaining to my colleagues for the last 20 years that NFS shares present some real security issues without tight user and environmental control. If I find one more set of Subversion, passphrase-free SSH, or LDAP credentials in a plain-text, shared home directory I'm going to... well, get paid for cleaning up the mess. But it wastes time cleaning up security as an afterthought.
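To be concrete about what I mean (purely illustrative; the mount point is the one being discussed in this thread), the check boils down to something like:

#!/usr/bin/env python
# Warn if the koji share is mounted as anything other than NFSv4 with a
# Kerberos security flavour.  Illustrative only.
KOJI_MOUNT = "/mnt/koji"

with open("/proc/mounts") as mounts:
    for line in mounts:
        dev, mnt, fstype, opts = line.split()[:4]
        if mnt != KOJI_MOUNT:
            continue
        if fstype != "nfs4":
            print("%s is mounted as %s, not nfs4" % (mnt, fstype))
        if not any(o.startswith("sec=krb5") for o in opts.split(",")):
            print("%s has no krb5 security flavour (opts: %s)" % (mnt, opts))
        break
    else:
        print("%s not found in /proc/mounts" % KOJI_MOUNT)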
On 11/08/2015 02:28, Nico Kadel-Garcia wrote:
Please tell me it's at least an NFSv4 share and mount, with Kerberized authentication? I've had some difficulty explaining to my colleagues for the last 20 years that NFS shares present some real security issues without tight user and environmental control.
I've no idea. I'd imagine Fedora's infra counts as "tight user and environmental control", though. I'm also not sure it's really any of our concern.
On 10/08/15 21:49, Howard Johnson wrote:
Each one has its own Koji environment, separate from the primary env in the Fedora infrastructure. Koji-shadow works by pulling build information down via the Koji hub web server, not using a shared NFS mount. So, if you want to use the Fedora model, all primary arch builders need access to a common NFS mount. Any secondary arches don't.
Most, if not all, of the management layers and orchestration should be automatable; then it's only the user auth and target setup we need to worry about. If that is the case, is this a better model to run for distributed setups? (I realise we end up with many instances of Koji instead of one..)
- KB
On 08/10/2015 03:54 AM, Karanbir Singh wrote:
Are you saying that the blades in the same blade center don't have a gbit link to each other? Isn't that where/how the present Koji origin is set up?
Why would we need geo-distributed build hosts?
If we wanted geo-distributed output (i.e. where the builds are actually hosted), that would be one thing .. and maybe even needed if there is enough bandwidth for downloads.
But geo-distributed builders, to me, are self-defeating. If you need MORE builders, put them where the repos they build against live and use them there.
What would be the benefit of geo-distributed build machines?
On 10/08/15 13:48, Johnny Hughes wrote:
Why would we need geo-distributed build hosts?
Various architectures will get built and bootstrapped in various places; splitting these up to feed build, CI, test, community access, etc. just dilutes the capacity massively.
If we wanted geo-distributed output (i.e. where the builds are actually hosted), that would be one thing .. and maybe even needed if there is enough bandwidth for downloads.
But geo-distributed builders, to me, are self-defeating. If you need MORE builders, put them where the repos they build against live and use them there.
I don't think build location and repo location are the same problem space; one is easily solved without needing the other.
What would be the benefit of geo-distributed build machines?
We've run geo-distributed builders since the CentOS-4 days and haven't had any major issues so far :) I am sure we can maximise the impact of smaller, more focused machine donations to the project going forward in this way as well.