Hi Folks,
While we cycle through some of the remaining builds I'd like to start a discussion about what the CentOS 8 repo structure might look like. We need to think about what the repos look like on-disk, and how this might impact the mirrors.
Currently the thinking is this:
3 "core" repos:
- BaseOS (contains a small packageset of the base distribution)
- AppStream ("where the modules go")
- Devel ("-devel packages and other tools")
These descriptions are very much an oversimplification, but it's an ok model to work with.
We plan to compose all of those repositories, and deliver updates in the same stream. The x86_64 tree for the BaseOS repository will look something like this:
x86_64
├── debug   # Note: we will likely snip this out and move debugs to debuginfo.centos.org
│   └── tree
│       ├── Packages
│       └── repodata
├── iso
└── os
    ├── EFI
    │   └── BOOT
    │       └── fonts
    ├── images
    │   └── pxeboot
    ├── isolinux
    ├── Packages
    └── repodata
The plan is to re-compose BaseOS and all the "release" media like cloud images/ISOs at the traditional point-release times, and refresh the repodata in between as updates come in.
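For concreteness, a minimal sketch of that refresh step (the paths are illustrative and the actual compose tooling may well differ):

    # Hypothetical sketch of refreshing repodata in place as updates land;
    # the real compose pipeline may use different tooling entirely.
    cp /incoming/updates/*.rpm /repo/8/BaseOS/x86_64/os/Packages/
    createrepo_c --update /repo/8/BaseOS/x86_64/os/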
Currently there are 3 primary architectures: x86_64, ppc64le, and aarch64, and 1 alternative architecture: armhfp. For CentOS 7 we split our primary and alternate architectures into /centos and /altarch on the mirrors to allow mirror admins to choose which trees to mirror. Is this something we'd like to continue?
I welcome any questions and discussion that we might have.
On 19/06/2019 17:08, Brian Stinson wrote:
Hi Folks,
While we cycle through some of the remaining builds I'd like to start a discussion about what the CentOS 8 repo structure might look like. We need to think about what the repos look like on-disk, and how this might impact the mirrors.
Currently the thinking is this:
3 "core" repos:
- BaseOS (contains a small packageset of the base distribution)
- AppStream ("where the modules go")
- Devel ("-devel packages and other tools")
These descriptions are very much an oversimplification, but it's an ok model to work with.
Does that mean having the add-ons like ha/rs going either to BaseOS (for simple packages) or to AppStream (if built as modules)?
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
The x86_64 tree for the BaseOS repository will look something like this:
x86_64
├── debug   # Note: we will likely snip this out and move debugs to debuginfo.centos.org
│   └── tree
│       ├── Packages
│       └── repodata
├── iso
└── os
    ├── EFI
    │   └── BOOT
    │       └── fonts
    ├── images
    │   └── pxeboot
    ├── isolinux
    ├── Packages
    └── repodata
The plan is to re-compose BaseOS and all the "release" media like cloud images/ISOs at the traditional point-release times, and refresh the repodata in between as updates come in.
Currently there are 3 primary architectures: x86_64, ppc64le, and aarch64, and 1 alternative architecture: armhfp. For CentOS 7 we split our primary and alternate architectures into /centos and /altarch on the mirrors to allow mirror admins to choose which trees to mirror. Is this something we'd like to continue?
If ppc64le and aarch64 were "promoted" as "primary arches" (and it's now the case even for 7 in fact, as we consider those, also used for cbs.centos.org SIG builds), I'd say +1 to "move" them back under /centos/. We can still have directories in /altarch/ with a simple README file explaining where to find those for 8.
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
This may just be a case of having a second set of metadata.
also, what life term are we going to have for the single repo structure ? are we hoping to retain all content for the life of the release ?
regards,
On Wed, Jun 19, 2019, at 11:32, Karanbir Singh wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
Can you clarify this? What "reproducible install" pattern is broken here?
This may just be a case of having a second set of metadata.
also, what life term are we going to have for the single repo structure ? are we hoping to retain all content for the life of the release ?
regards,
On 19/06/2019 17:45, Brian Stinson wrote:
On Wed, Jun 19, 2019, at 11:32, Karanbir Singh wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
Can you clarify this? What "reproducible install" pattern is broken here?
I need to be able to run installs against a mirror, weeks and months apart and arrive at the same payload installed exactly.
regards
On Wed, Jun 19, 2019, at 13:07, Karanbir Singh wrote:
On 19/06/2019 17:45, Brian Stinson wrote:
On Wed, Jun 19, 2019, at 11:32, Karanbir Singh wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
Can you clarify this? What "reproducible install" pattern is broken here?
I need to be able to run installs against a mirror, weeks and months apart and arrive at the same payload installed exactly.
regards
Is there something preventing you from doing that if we ship updates in the same repo as the 0-day release content?
On 19/06/2019 19:22, Brian Stinson wrote:
On Wed, Jun 19, 2019, at 13:07, Karanbir Singh wrote:
On 19/06/2019 17:45, Brian Stinson wrote:
On Wed, Jun 19, 2019, at 11:32, Karanbir Singh wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
Can you clarify this? What "reproducible install" pattern is broken here?
I need to be able to run installs against a mirror, weeks and months apart and arrive at the same payload installed exactly.
regards
Is there something preventing you from doing that if we ship updates in the same repo as the 0-day release content?
yes,
if I yum install httpd with the base only, I'd like to get the same httpd, not the one 3 versions removed in the updates that have now landed in the same repo.
again, this may be just a case of publishing a 2nd set of metadata rather than retaining the base rpm set, but we need to retain this functionality.
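If that 2nd set of metadata were published next to the rolling repo, a client could opt into the frozen view with a plain repo definition. A rough sketch, with a hypothetical mirror URL and a hypothetical 'kickstart/' path:

    # Hypothetical client-side repo definition pointing at frozen GA metadata
    # published alongside the rolling BaseOS tree.
    cat > /etc/yum.repos.d/CentOS-BaseOS-GA.repo <<'EOF'
    [baseos-ga]
    name=CentOS 8 BaseOS (frozen GA metadata)
    baseurl=http://mirror.example.org/centos/8/BaseOS/x86_64/kickstart/
    enabled=0
    gpgcheck=1
    EOF
    yum --disablerepo='*' --enablerepo=baseos-ga install httpd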
On Wed, Jun 19, 2019, at 13:25, Karanbir Singh wrote:
On 19/06/2019 19:22, Brian Stinson wrote:
On Wed, Jun 19, 2019, at 13:07, Karanbir Singh wrote:
On 19/06/2019 17:45, Brian Stinson wrote:
On Wed, Jun 19, 2019, at 11:32, Karanbir Singh wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
Can you clarify this? What "reproducible install" pattern is broken here?
I need to be able to run installs against a mirror, weeks and months apart and arrive at the same payload installed exactly.
regards
Is there something preventing you from doing that if we ship updates in the same repo as the 0-day release content?
yes,
if I yum install httpd with the base only, I'd like to get the same httpd, not the one 3 versions removed in the updates that have now landed in the same repo.
again, this may be just a case of publishing a 2nd set of metadata rather than retaining the base rpm set, but we need to retain this functionality.
Wouldn't pinning versions be better here if that's what you need? If you took that same kickstart over to a RHEL machine, you'd get the updates over there.
Seems to me like delivering the updates separately goes against our community recommendations anyway (i.e. the first thing we say in IRC is "did you run yum/dnf update?").
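For what it's worth, pinning with the dnf versionlock plugin would look something like this (the httpd version shown is hypothetical; substitute whatever GA actually shipped):

    # Sketch of version pinning with the versionlock plugin; the NEVR is
    # illustrative only.
    dnf install python3-dnf-plugin-versionlock
    dnf versionlock add httpd-2.4.37-10.el8
    dnf install httpd   # resolves to the locked version even after updates land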
On 19/06/2019 19:42, Brian Stinson wrote:
I need to be able to run installs against a mirror, weeks and months apart and arrive at the same payload installed exactly.
again, this may be just a case of publishing a 2nd set of metadata rather than retaining the base rpm set, but we need to retain this functionality.
Wouldn't pinning versions be better here if that's what you need? If you took that same kickstart over to a RHEL machine, you'd get the updates over there.
no, you would get a point release (same as media ..) if you wanted to. I want to make sure we don't lose that ability in CentOS, so we're going to need to do this.
Seems to me like delivering the updates separately goes against our community recommendations anyways (i.e. the first thing we say in irc is "did you run yum/dnf update?").
we don't, however, ask people to install arbitrary untested trees.
Help me qualify the concern with an extra set of metadata to match the point-release media?
regards,
On 24/06/2019 12:42, Karanbir Singh wrote:
On 19/06/2019 19:42, Brian Stinson wrote:
I need to be able to run installs against a mirror, weeks and months apart and arrive at the same payload installed exactly.
again, this may be just a case of publishing a 2nd set of metadata rather than retaining the base rpm set, but we need to retain this functionality.
Wouldn't pinning versions be better here if that's what you need? If you took that same kickstart over to a RHEL machine, you'd get the updates over there.
no, you would get a point release (same as media ..) if you wanted to. I want to make sure we don't lose that ability in CentOS, so we're going to need to do this.
It would indeed be worth clarifying, as I was under the impression that Brian said the reverse with "all updates would land in BaseOS", and so BaseOS would be a moving target ...
@Brian: can we clarify this once and for all?
On 24/06/2019 13:05, Fabian Arrotin wrote:
On 24/06/2019 12:42, Karanbir Singh wrote:
On 19/06/2019 19:42, Brian Stinson wrote:
I need to be able to run installs against a mirror, weeks and months apart and arrive at the same payload installed exactly.
again, this may be just a case of publishing a 2nd set of metadata rather than retaining the base rpm set, but we need to retain this functionality.
Wouldn't pinning versions be better here if that's what you need? If you took that same kickstart over to a RHEL machine, you'd get the updates over there.
no, you would get a point release (same as media ..) if you wanted to. I want to make sure we don't lose that ability in CentOS, so we're going to need to do this.
It would indeed be worth clarifying, as I was under the impression that Brian said the reverse with "all updates would land in BaseOS", and so BaseOS would be a moving target ...
@Brian: can we clarify this once and for all?
2 different things:
a) BaseOS will get all the content from a point release and all updates will land in the same directory
b) we snapshot the repo metadata and publish it in addition to the baseos repo ( what was called the kickstart repo ).
we can do both.
On 24/06/2019 13:09, Karanbir Singh wrote:
On 24/06/2019 13:05, Fabian Arrotin wrote:
On 24/06/2019 12:42, Karanbir Singh wrote:
On 19/06/2019 19:42, Brian Stinson wrote:
I need to be able to run installs against a mirror, weeks and months apart and arrive at the same payload installed exactly.
again, this may be just a case of publishing a 2nd set of metadata rather than retaining the base rpm set, but we need to retain this functionality.
Wouldn't pinning versions be better here if that's what you need? If you took that same kickstart over to a RHEL machine, you'd get the updates over there.
no, you would get a point release (same as media ..) if you wanted to. I want to make sure we don't lose that ability in CentOS, so we're going to need to do this.
It would indeed be worth clarifying, as I was under the impression that Brian said the reverse with "all updates would land in BaseOS", and so BaseOS would be a moving target ...
@Brian: can we clarify this once and for all?
2 different things:
a) BaseOS will get all the content from a point release and all updates will land in the same directory
b) we snapshot the repo metadata and publish it in addition to the baseos repo ( what was called the kickstart repo ).
we can do both.
Again, this is *how* you see this, but it can differ from how it will actually be composed, which is why we probably need a kind of formal plan (in the wiki?) holding the $latest version of this thread, instead of trying to read "between the lines" what was said or not, etc .. :-)
On 6/19/19 11:32 AM, Karanbir Singh wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
I would point out .. this is how RHEL works .. there is not an updates repo there.
This may just be a case of having a second set of metadata.
also, what life term are we going to have for the single repo structure ? are we hoping to retain all content for the life of the release ?
On Wed, Jun 19, 2019, at 1:13 PM, Johnny Hughes wrote:
On 6/19/19 11:32 AM, Karanbir Singh wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
I would point out .. this is how RHEL works .. there is not an updates repo there.
RHEL has the 'kickstart' repo which is equivalent to the CentOS 'base' repo in that it never changes once released.
V/r, James Cassell
On Wed, Jun 19, 2019, at 13:51, James Cassell wrote:
On Wed, Jun 19, 2019, at 1:13 PM, Johnny Hughes wrote:
On 6/19/19 11:32 AM, Karanbir Singh wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
I would point out .. this is how RHEL works .. there is not an updates repo there.
RHEL has the 'kickstart' repo which is equivalent to the CentOS 'base' repo in that it never changes once released.
@kbsingh: if we did something like this, would it solve your use-case?
On Wed, Jun 19, 2019 at 1:06 PM Brian Stinson brian@bstinson.com wrote:
RHEL has the 'kickstart' repo which is equivalent to the CentOS 'base' repo in that it never changes once released.
@kbsingh: if we did something like this, would it solve your use-case?
I have the same concern as KB, and I think this would be a fine solution. Just looking for some way to have a reproducible install from the repo.
-Jeff
On Wed, Jun 19, 2019, at 14:11, Jeff Sheltren wrote:
On Wed, Jun 19, 2019 at 1:06 PM Brian Stinson brian@bstinson.com wrote:
RHEL has the 'kickstart' repo which is equivalent to the CentOS 'base' repo in that it never changes once released.
@kbsingh: if we did something like this, would it solve your use-case?
I have the same concern as KB, and I think this would be a fine solution. Just looking for some way to have a reproducible install from the repo.
-Jeff
Operationally, this means we'd compose BaseOS by itself at a point-release time, hard-link that tree over to the 'kickstart' location on the masters, and then re-spin for 0-day updates.
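As a rough sketch of that hard-link step (paths are illustrative; on a single filesystem cp can do this cheaply):

    # Hypothetical: hard-link the GA BaseOS tree to a frozen 'kickstart'
    # location, then re-run the compose for 0-day updates.
    cp -al /compose/8-GA/BaseOS/x86_64/os /repo/8/BaseOS/x86_64/kickstart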
On Wed, Jun 19, 2019 at 1:14 PM Brian Stinson brian@bstinson.com wrote:
Operationally, this means we'd compose BaseOS by itself at a point-release time, hard-link that tree over to the 'kickstart' location on the masters, and then re-spin for 0-day updates.
That sounds good to me. I suggest using a date-stamped directory name for "kickstart" to make it obvious.
-Jeff
On 19/06/2019 20:14, Brian Stinson wrote:
Operationally, this means we'd compose BaseOS by itself at a point-release time, hard-link that tree over to the 'kickstart' location on the masters, and then re-spin for 0-day updates.
I was thinking we may just need a createrepo run somewhere else while the baseos does not have 0-day updates etc. I don't know if we need to hardlink content around etc.
Thinking one step out, would this metadata be the same as what we ship on the ISO media ?
regards
On Wed, Jun 19, 2019, at 14:55, Karanbir Singh wrote:
On 19/06/2019 20:14, Brian Stinson wrote:
Operationally, this means we'd compose BaseOS by itself at a point-release time, hard-link that tree over to the 'kickstart' location on the masters, and then re-spin for 0-day updates.
I was thinking we may just need a createrepo run somewhere else while the baseos does not have 0-day updates etc. I don't know if we need to hardlink content around etc.
Thinking one step out, would this metadata be the same as what we ship on the ISO media ?
regards
I'm trying to get us away from one-off processes done by hand. It's easier to copy the structure as-is after a GA compose, and run an update compose with all of the updates directly afterward.
We could (and probably should) generate the install media from the GA compose.
On 19/06/2019 20:59, Brian Stinson wrote:
I'm trying to get us away from one-off processes done by hand. It's easier to copy the structure as-is after a GA compose, and run an update compose with all of updates directly afterward.
We could (and probably should) generate the install media from the GA compose.
in which case, we just need a copy of the GA metadata on the mirrors somewhere
On 19/06/2019 20:06, Brian Stinson wrote:
RHEL has the 'kickstart' repo which is equivalent to the CentOS 'base' repo in that it never changes once released.
@kbsingh: if we did something like this, would it solve your use-case?
yes, I just want to make sure the trees are available, match the point release, and are usable at any point in time from there on.
we can call this repo 'os'.
Regards,
On Wed, 19 Jun 2019 at 12:32, Karanbir Singh kbsingh@centos.org wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
I think there is some confusion as what Brian is describing is what RHEL has been doing since EL6.
This isn't any different from what RHEL does now. You have a primary iso image you install from but if you point a kickstart to a baseurl=https://cdn.<foo> you get whatever was in the compose of the day (with all the previous packages there also but most installs will just pull the latest). If you want a reproducible RHEL install you need to only use the ISO or some similar frozen toolkit (Satellite, specific local branches, etc) but otherwise you can get different installs each day. Whether this is a good design or not is a different question... but it is one which has been in place for nearly 10 years in the RHEL upstream. [Currently the Fedora up-upstream does keep /updates/ but that is done by other production tools.]
This may just be a case of having a second set of metadata.
also, what life term are we going to have for the single repo structure ? are we hoping to retain all content for the life of the release ?
I believe what Brian was saying is that this would only be retained for the life of a point release, but I may be misunderstanding.
regards,
On 19/06/2019 18:47, Stephen John Smoogen wrote:
On Wed, 19 Jun 2019 at 12:32, Karanbir Singh <kbsingh@centos.org mailto:kbsingh@centos.org> wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
I think there is some confusion as what Brian is describing is what RHEL has been doing since EL6.
This isn't any different from what RHEL does now. You have a primary iso image you install from but if you point a kickstart to a baseurl=https://cdn.<foo> you get whatever was in the compose of the day (with all the previous packages there also but most installs will just pull the latest). If you want a reproducible RHEL install you need to only use the ISO or some similar frozen toolkit (Satellite, specific local branches, etc) but otherwise you can get different installs each day. Whether this is a good design or not is a different question... but it is one which has been in place for nearly 10 years in the RHEL upstream. [Currently the Fedora up-upstream does keep /updates/ but that is done by other production tools.]
unsure how/what you are talking about here - are you saying that we are going to adopt the RHEL delivery model? If so, how is z-stream going to work? Are we doing mappings for those as well now?
additionally, the subs manager allows me to lock out and away specific rhel content, on a rhel machine - are we adopting that as well ?
This may just be a case of having a second set of metadata. also, what life term are we going to have for the single repo structure ? are we hoping to retain all content for the life of the release ?
I believe what Brian was saying is that this would only be retained for the life of a point release, but I may be misunderstanding.
That works, can we get confirmation here ?
On Wed, Jun 19, 2019, at 13:10, Karanbir Singh wrote:
On 19/06/2019 18:47, Stephen John Smoogen wrote:
On Wed, 19 Jun 2019 at 12:32, Karanbir Singh <kbsingh@centos.org mailto:kbsingh@centos.org> wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
I think there is some confusion as what Brian is describing is what RHEL has been doing since EL6.
This isn't any different from what RHEL does now. You have a primary iso image you install from but if you point a kickstart to a baseurl=https://cdn.<foo> you get whatever was in the compose of the day (with all the previous packages there also but most installs will just pull the latest). If you want a reproducible RHEL install you need to only use the ISO or some similar frozen toolkit (Satellite, specific local branches, etc) but otherwise you can get different installs each day. Whether this is a good design or not is a different question... but it is one which has been in place for nearly 10 years in the RHEL upstream. [Currently the Fedora up-upstream does keep /updates/ but that is done by other production tools.]
unsure how/what you are talking about here - are you saying that we are going to adopt the RHEL delivery model? If so, how is z-stream going to work? Are we doing mappings for those as well now?
Very much not in favor of doing z-stream updates. But EUS is a separate RHEL entitlement; does the existence of EUS somehow change the discussion here?
additionally, the subs manager allows me to lock out and away specific rhel content, on a rhel machine - are we adopting that as well ?
This may just be a case of having a second set of metadata.
also, what life term are we going to have for the single repo structure ? are we hoping to retain all content for the life of the release ?
I believe what Brian was saying is that this would only be retained for the life of a point release, but I may be misunderstanding.
That works, can we get confirmation here ?
I deliberately left that unspecified to generate discussion here. The tradeoff is between keeping large amounts of history, and conserving space on the mirrors. If we want to prune at point-release time we can.
On 19/06/2019 19:27, Brian Stinson wrote:
On Wed, Jun 19, 2019, at 13:10, Karanbir Singh wrote:
On 19/06/2019 18:47, Stephen John Smoogen wrote:
This may just be a case of having a second set of metadata.
also, what life term are we going to have for the single repo structure ? are we hoping to retain all content for the life of the release ?
I believe what Brian was saying is that this would only be retained for the life of a point release, but I may be misunderstanding.
That works, can we get confirmation here ?
I deliberately left that unspecified to generate discussion here. The tradeoff is between keeping large amounts of history, and conserving space on the mirrors. If we want to prune at point-release time we can.
Personally I'd prefer a model closer to rhel where all packages are available within a channel/repository for the life of the product, but I fully appreciate the tradeoff on mirror size etc. The really important thing for me is that we don't break the behaviour of CentOS relative to rhel, so it is really important that older content is always available / installable from somewhere - currently that's the vault repo and although not ideal it provides a workaround.
On 19/06/2019 19:27, Brian Stinson wrote:
I deliberately left that unspecified to generate discussion here. The tradeoff is between keeping large amounts of history, and conserving space on the mirrors. If we want to prune at point-release time we can.
There are a couple of fundamental challenges here: the way we distribute content is a model built for the internet, and the scope, that existed in 2001.
There is enough infra and wide enough footprint that we should consider moving to a better content delivery model and service, rather than static files, promoted around the world entirely removed from frequency of use or impact. This is going to be a longer conversation, but something we should scope though ( Even if we dont do it. )
Retaining a small footprint in msync means a more consistent use-case for everyone who needs that content. It's a model that's worked well for us, and I'd say it's worth sticking with - if we were to prune, say, 2 times a year, past point releases, that would likely work fine.
The one problem we -do- need to think through at this point is when folks get stuck across trim lines, e.g. a running kernel used for a module build now needs a src which got trimmed; there is never a good solution for this, other than promoting every vault repo as enabled so that all content is visible, always.
regards,
On Wed, 19 Jun 2019 at 14:10, Karanbir Singh kbsingh@centos.org wrote:
On 19/06/2019 18:47, Stephen John Smoogen wrote:
On Wed, 19 Jun 2019 at 12:32, Karanbir Singh <kbsingh@centos.org mailto:kbsingh@centos.org> wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
I think there is some confusion as what Brian is describing is what RHEL has been doing since EL6.
This isn't any different from what RHEL does now. You have a primary iso image you install from but if you point a kickstart to a baseurl=https://cdn.<foo> you get whatever was in the compose of the day (with all the previous packages there also but most installs will just pull the latest). If you want a reproducible RHEL install you need to only use the ISO or some similar frozen toolkit (Satellite, specific local branches, etc) but otherwise you can get different installs each day. Whether this is a good design or not is a different question... but it is one which has been in place for nearly 10 years in the RHEL upstream. [Currently the Fedora up-upstream does keep /updates/ but that is done by other production tools.]
unsure how/what you are talking about here - are you saying that we are going to adopt the RHEL delivery model? If so, how is z-stream going to work? Are we doing mappings for those as well now?
I am saying that with modules and unless you want to stand up a bunch of other things.. you are probably going to have to. However this isn't much different from how the current CentOS-7 is.. there are no Z streams for CentOS-7.. there is only the latest with dot releases being sort of rebased snapshots that are built off of upstream src.rpms.. So if people have been saying that there is no CentOS-7.7 there is just CentOS now.. then this would be taking that one step further.
By the way, I am only +0 on this at the moment. I can see why it looks attractive and makes certain work easier, but I am also aware it will make deployment different/harder. But hey, ~5 years ago we had a similar conversation on CentOS 7 minor version numbers with me arguing against going with 7.<YYYYMM> and wanting to keep 7.<minor>.
On Wed, Jun 19, 2019 at 12:32 PM Karanbir Singh kbsingh@centos.org wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
This may just be a case of having a second set of metadata.
A "parent" directory with secondary metadata, including all sub repositories, might work if we want it. But I think it's going to cause mismatches and confusion between RHEL and CentOS, and we should just use the upstream layout. For example, one issue is that the upstream channels overlap: the "codebuilder", "highavailability" and "resilientstorage" channels have some overlapping SRPMs and RPMs. Duplicate content in multiple channels is begging for trouble. The activation of modules would seem to compound the problem. Upstream filesystems may support filesystems with hardlinks among identical RPMs. Installation DVD images will not.
also, what life term are we going to have for the single repo structure ? are we hoping to retain all content for the life of the release ?
Good question. I think it's going to be safer to simply preserve the upstream layout and enable the additional channels, such as "codebuilder" and "highavailability" and "resilientstorage", by default. The "ansible" channels may require more thought.
regards,
On Fri, Jun 21, 2019 at 7:46 AM Nico Kadel-Garcia nkadel@gmail.com wrote:
On Wed, Jun 19, 2019 at 12:32 PM Karanbir Singh kbsingh@centos.org wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
This may just be a case of having a second set of metadata.
A "parent" directory with secondary metadata, including all sub repositories, might work if we want it. But I think it's going to cause mismatches and confusion between RHEL and CentOS, and we should just use the upstream layout. For example, one issue is that the upstream channels overlap: the "codebuilder", "highavailability" and "resilientstorage" channels have some overlapping SRPMs and RPMs. Duplicate content in multiple channels is begging for trouble. The activation of modules would seem to compound the problem. Upstream filesystems may support filesystems with hardlinks among identical RPMs. Installation DVD images will not.
also, what life term are we going to have for the single repo structure ? are we hoping to retain all content for the life of the release ?
Good question. I think it's going to be safer to simply preserve the upstream layout and enable the additional channels, such as "codebuilder" and "highavailability" and "resilientstorage", by default. The "ansible" channels may require more thought.
I'd rather have this bonkers layout *not* preserved in CentOS. Putting it all together in one repo (as was done for CentOS 6 and CentOS 7) has made things tremendously easier. The reason they're broken apart in RHEL is to allow charging people money for various aspects of RHEL. Or in the case of the "codebuilder" repo, dumb marketing purposes.
Simplicity is key here, and having the unified repo makes it *much* easier to use CentOS and build software from it.
On Fri, 21 Jun 2019 at 08:25, Neal Gompa ngompa13@gmail.com wrote:
On Fri, Jun 21, 2019 at 7:46 AM Nico Kadel-Garcia nkadel@gmail.com wrote:
On Wed, Jun 19, 2019 at 12:32 PM Karanbir Singh kbsingh@centos.org wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
This may just be a case of having a second set of metadata.
A "parent" directory with secondary metadata, including all sub repositories, might work if we want it. But I think it's going to cause mismatches and confusion between RHEL and CentOS, and we should just use the upstream layout. For example, one issue is that the upstream channels overlap: the "codebuilder", "highavailability" and "resilientstorage" channels have some overlapping SRPMs and RPMs. Duplicate content in multiple channels is begging for trouble. The activation of modules would seem to compound the problem. Upstream filesystems may support filesystems with hardlinks among identical RPMs. Installation DVD images will not.
also, what life term are we going to have for the single repo structure ? are we hoping to retain all content for the life of the release ?
Good question. I think it's going to be safer to simply preserve the upstream layout and enable the additional channels, such as "codebuilder" and "highavailability" and "resilientstorage", by default. The "ansible" channels may require more thought.
I'd rather have this bonkers layout *not* preserved in CentOS. Putting it all together in one repo (as was done for CentOS 6 and CentOS 7) has made things tremendously easier. The reason they're broken apart in RHEL is to allow charging people money for various aspects of RHEL. Or in the case of the "codebuilder" repo, dumb marketing purposes.
Simplicity is key here, and having the unified repo makes it *much* easier to use CentOS and build software from it.
Due to modularity and compose times etc.. it would make more sense to have at most something like
Non-modular/
Modular/
Updates/
|------------> Non-modular/
|------------> Modular/
On Fri, Jun 21, 2019, at 07:25, Neal Gompa wrote:
On Fri, Jun 21, 2019 at 7:46 AM Nico Kadel-Garcia nkadel@gmail.com wrote:
On Wed, Jun 19, 2019 at 12:32 PM Karanbir Singh kbsingh@centos.org wrote:
On 19/06/2019 17:18, Fabian Arrotin wrote:
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
that's not going to work - we need to retain the ability to deliver reproducible installs.
This may just be a case of having a second set of metadata.
A "parent" directory with secondary metadata, including all sub repositories, might work if we want it. But I think it's going to cause mismatches and confusion between RHEL and CentOS, and we should just use the upstream layout. For example, one issue is that the upstream channels overlap: the "codebuilder", "highavailability" and "resilientstorage" channels have some overlapping SRPMs and RPMs. Duplicate content in multiple channels is begging for trouble. The activation of modules would seem to compound the problem. Upstream filesystems may support filesystems with hardlinks among identical RPMs. Installation DVD images will not.
also, what life term are we going to have for the single repo structure ? are we hoping to retain all content for the life of the release ?
Good question. I think it's going to be safer to simply preserve the upstream layout and enable the additional channels, such as "codebuilder" and "highavailability" and "resilientstorage", by default. The "ansible" channels may require more thought.
I'd rather have this bonkers layout *not* preserved in CentOS. Putting it all together in one repo (as was done for CentOS 6 and CentOS 7) has made things tremendously easier. The reason they're broken apart in RHEL is to allow charging people money for various aspects of RHEL. Or in the case of the "codebuilder" repo, dumb marketing purposes.
Simplicity is key here, and having the unified repo makes it *much* easier to use CentOS and build software from it.
To be clear, the plan is to *not* ship separate repositories for ResilientStorage, NFV, HighAvailability, or RT. There may be components of those upstream channels that make it into BaseOS.
On Fri, Jun 21, 2019 at 8:45 AM Brian Stinson brian@bstinson.com wrote:
To be clear, the plan is to *not* ship separate repositories for ResilientStorage, NFV, HighAvailability, or RT. There may be components of those upstream channels that make it into BaseOS.
Given the python modules provided in those channels, they will definitely be needed in CentOS 8 or EPEL 8: I see python-boto3, python-botocore, and python-s3transfer, for example, in multiple RHEL 8 channels. Even if their segregation in multiple channels upstream was confusing or unwise, I'm not convinced it makes sense to try to merge them cleverly elsewhere in CentOS 8, especially if those channels ever differentiate when the individual modules are published upstream by RHEL. And since python modules *do* sometimes update, incompatibly with other python modules, I see a modest risk there.
Since "BaseOS" is its special little channel designed for the minimum core of highly stable, base system components like rpm itself, bind, and bzip2, I don't see how the frequently updated AWS published python modules would be appropriate there. Do you see a way such dynamically updated components would be appropriate there?
On Fri, Jun 21, 2019, at 19:17, Nico Kadel-Garcia wrote:
On Fri, Jun 21, 2019 at 8:45 AM Brian Stinson brian@bstinson.com wrote:
To be clear, the plan is to *not* ship separate repositories for ResilientStorage, NFV, HighAvailability, or RT. There may be components of those upstream channels that make it into BaseOS.
Given the python modules provided in those channels, they will definitely be needed in CentOS 8 or EPEL 8: I see python-boto3, python-botocore, and python-s3transfer, for example, in multiple RHEL 8 channels. Even if their segregation in multiple channels upstream was confusing or unwise, I'm not convinced it makes sense to try to merge them cleverly elsewhere in CentOS 8, especially if those channels ever differentiate when the individual modules are published upstream by RHEL. And since python modules *do* sometimes update, incompatibly with other python modules, I see a modest risk there.
Since "BaseOS" is its special little channel designed for the minimum core of highly stable, base system components like rpm itself, bind, and bzip2, I don't see how the frequently updated AWS published python modules would be appropriate there. Do you see a way such dynamically updated components would be appropriate there? _______________________________________________ CentOS-devel mailing list CentOS-devel@centos.org https://lists.centos.org/mailman/listinfo/centos-devel
It's not a question of modular/non-modular. There are quite a few non-modular RPMs shipped in AppStream, for example. It's more of a question of release and lifecycle bundles.
Putting python-boto3, python-botocore, etc. (those are non-modular RPMs by the way) in BaseOS matches the expected lifecycle of those packages (although this is a wild guess on our part). AppStream might change within a traditional point-release, but the other upstream channels may not.
On Fri, Jun 21, 2019 at 8:26 PM Brian Stinson brian@bstinson.com wrote:
On Fri, Jun 21, 2019, at 19:17, Nico Kadel-Garcia wrote:
On Fri, Jun 21, 2019 at 8:45 AM Brian Stinson brian@bstinson.com wrote:
To be clear, the plan is to *not* ship separate repositories for ResilientStorage, NFV, HighAvailability, or RT. There may be components of those upstream channels that make it into BaseOS.
Given the python modules provided in those channels, they will definitely be needed in CentOS 8 or EPEL 8: I see python-boto3, python-botocore, and python-s3transfer, for example, in multiple RHEL 8 channels. Even if their segregation in multiple channels upstream was confusing or unwise, I'm not convinced it makes sense to try to merge them cleverly elsewhere in CentOS 8, especially if those channels ever differentiate when the individual modules are published upstream by RHEL. And since python modules *do* sometimes update, incompatibly with other python modules, I see a modest risk there.
Since "BaseOS" is its special little channel designed for the minimum core of highly stable, base system components like rpm itself, bind, and bzip2, I don't see how the frequently updated AWS published python modules would be appropriate there. Do you see a way such dynamically updated components would be appropriate there? _______________________________________________ CentOS-devel mailing list CentOS-devel@centos.org https://lists.centos.org/mailman/listinfo/centos-devel
It's not a question of modular/non-modular. There are quite a few non-modular RPMs shipped in AppStream, for example. It's more of a question of release and lifecycle bundles.
Putting python-boto3, python-botocore, etc. (those are non-modular RPMs by the way) in BaseOS matches the expected lifecycle of those packages (although this is a wild guess on our part). AppStream might change within a traditional point-release, but the other upstream channels may not.
I was referring to the "Python modules", not "RPM modules". Sorry for the confusion: I'm afraid that's going to happen for others as well, with the RPMs for Python.
On Wed, Jun 19, 2019, at 11:18, Fabian Arrotin wrote:
On 19/06/2019 17:08, Brian Stinson wrote:
Hi Folks,
While we cycle through some of the remaining builds I'd like to start a discussion about what the CentOS 8 repo structure might look like. We need to think about what the repos look like on-disk, and how this might impact the mirrors.
Currently the thinking is this:
3 "core" repos:
- BaseOS (contains a small packageset of the base distribution)
- AppStream ("where the modules go")
- Devel ("-devel packages and other tools")
These descriptions are very much an oversimplification, but it's an ok model to work with.
Does that mean having the add-ons like ha/rs going either to BaseOS (for simple packages) or to AppStream (if built as modules)?
There are some HA and RT components that are shipped to CentOS BaseOS, but these are separate from the HighAvailability and RT variants in upstream. We won't ship the HA or RT variants.
We plan to compose all of those repositories, and deliver updates in the same stream.
Just so that people realize : no *updates* repo anymore, so all combined : if you install from network $today, what you'll install $tomorrow will have all rolled-in directly
The x86_64 tree for the BaseOS repository will look something like this:
x86_64
├── debug   # Note: we will likely snip this out and move debugs to debuginfo.centos.org
│   └── tree
│       ├── Packages
│       └── repodata
├── iso
└── os
    ├── EFI
    │   └── BOOT
    │       └── fonts
    ├── images
    │   └── pxeboot
    ├── isolinux
    ├── Packages
    └── repodata
The plan is to re-compose BaseOS and all the "release" media like cloud images/ISOs at the traditional point-release times, and refresh the repodata in between as updates come in.
Currently there are 3 primary architectures: x86_64, ppc64le, and aarch64, and 1 alternative architecture: armhfp. For CentOS 7 we split our primary and alternate architectures into /centos and /altarch on the mirrors to allow mirror admins to choose which trees to mirror. Is this something we'd like to continue?
If ppc64le and aarch64 were "promoted" as "primary arches" (and it's now the case even for 7 in fact, as we consider those, also used for cbs.centos.org SIG builds), I'd say +1 to "move" them back under /centos/. We can still have directories in /altarch/ with a simple README file explaining where to find those for 8.
On 19/06/2019 17:43, Brian Stinson wrote:
<snip>
Does that mean having the add-ons like ha/rs going either to BaseOS (for simple packages) or to AppStream (if built as modules)?
There are some HA and RT components that are shipped to CentOS BaseOS, but these are separate from the HighAvailability and RT variants in upstream. We won't ship the HA or RT variants.
I meant RS and not RT, but same concept/idea :) So do you mean that we have sources that landed on git.centos.org, that we eventually built, but that we'll not ship? If so, that contrasts with what we used to do, and I'd like to know why.
On Wed, Jun 19, 2019, at 11:47, Fabian Arrotin wrote:
On 19/06/2019 17:43, Brian Stinson wrote:
<snip>
Does that mean having the add-ons like ha/rs going either to BaseOS (for simple packages) or to AppStream (if built as modules)?
There are some HA and RT components that are shipped to CentOS BaseOS, but these are separate from the HighAvailability and RT variants in upstream. We won't ship the HA or RT variants.
I meant RS and not RT, but same concept/idea :) So do you mean that we have sources that landed on git.centos.org, that we eventually built, but that we'll not ship? If so, that contrasts with what we used to do, and I'd like to know why.
To say this a different way. If it gets pushed to a c8 branch, it goes in BaseOS (again, an oversimplification because there are 1 or 2 exceptions, but close enough). I'd like to *not* maintain HighAvailability/ResilientStorage/etc. as separate variant repos in CentOS.
On Wed, Jun 19, 2019, at 12:09 PM, Brian Stinson wrote:
Hi Folks,
While we cycle through some of the remaining builds I'd like to start a discussion about what the CentOS 8 repo structure might look like. We need to think about what the repos look like on-disk, and how this might impact the mirrors.
Currently the thinking is this:
3 "core" repos:
- BaseOS (contains a small packageset of the base distribution)
- AppStream ("where the modules go")
- Devel ("-devel packages and other tools")
These descriptions are very much an oversimplification, but it's an ok model to work with.
We plan to compose all of those repositories, and deliver updates in the same stream. The x86_64 tree for the BaseOS repository will look something like this:
x86_64
├── debug   # Note: we will likely snip this out and move debugs to debuginfo.centos.org
│   └── tree
│       ├── Packages
│       └── repodata
├── iso
└── os
    ├── EFI
    │   └── BOOT
    │       └── fonts
    ├── images
    │   └── pxeboot
    ├── isolinux
    ├── Packages
    └── repodata
I think this would be fine provided there is a frozen-in-time set of repodata that matches the current 'base' repo.
Going further, I'd propose something akin to https://snapshot.debian.org:
x86_64
├── iso
├── appstream
│   ├── Packages
│   └── repodata                # replaced daily
├── baseos
│   ├── Packages
│   └── repodata                # replaced daily
├── devel
│   ├── Packages
│   └── repodata                # replaced daily
├── snapshot
│   └── 20190619T145736Z
│       ├── appstream
│       │   ├── Packages -> ../../../appstream/Packages
│       │   └── repodata        # frozen in time
│       ├── baseos
│       │   ├── Packages -> ../../../baseos/Packages
│       │   └── repodata        # frozen in time
│       └── devel
│           ├── Packages -> ../../../devel/Packages
│           └── repodata        # frozen in time
└── kickstart
    ├── EFI
    │   └── BOOT
    │       └── fonts
    ├── images
    │   └── pxeboot
    ├── isolinux
    ├── appstream
    │   ├── repodata            # frozen in time
    │   └── Packages -> ../../appstream/Packages
    └── baseos
        ├── repodata            # frozen in time
        └── Packages -> ../../baseos/Packages
Something similar could be done with hardlinks of individual RPMs instead of symlinks to a Packages directory that has all RPMs, but the idea is that on-disk there is only one copy of each RPM. The additional space taken by the snapshot/ directory would be solely the daily repo metadata. These could be moved to archive.centos.org with each new point release, same as happens today w/ the rest of the tree.
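A rough sketch of cutting one such snapshot under that layout (names illustrative; assumes a shared Packages/ directory per repo):

    # Freeze today's repodata under snapshot/<timestamp>/ while sharing
    # the Packages/ payload via relative symlinks.
    ts=$(date -u +%Y%m%dT%H%M%SZ)
    for repo in appstream baseos devel; do
        mkdir -p "x86_64/snapshot/$ts/$repo"
        cp -a "x86_64/$repo/repodata" "x86_64/snapshot/$ts/$repo/repodata"
        ln -s "../../../$repo/Packages" "x86_64/snapshot/$ts/$repo/Packages"
    done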
(Also useful might be a latest-only tree that could be a smaller item to rsync and grab only the latest version of each package, but that would have to be done with hardlinks rather than symlinks.)
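(A latest-only tree could perhaps be derived with repomanage from dnf-plugins-core; illustrative sketch only:

    # Hypothetical: hard-link only the newest version of each package into
    # a latest/ tree, then generate fresh metadata for it.
    mkdir -p x86_64/latest/Packages
    dnf repomanage --new x86_64/baseos/Packages | \
        xargs -I{} ln {} x86_64/latest/Packages/
    createrepo_c x86_64/latest/
)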
Would something like above snapshot idea have any legs?
V/r, James Cassell
On Wed, Jun 19, 2019, at 14:23, James Cassell wrote:
On Wed, Jun 19, 2019, at 12:09 PM, Brian Stinson wrote:
Hi Folks,
While we cycle through some of the remaining builds I'd like to start a discussion about what the CentOS 8 repo structure might look like. We need to think about what the repos look like on-disk, and how this might impact the mirrors.
Currently the thinking is this:
3 "core" repos:
- BaseOS (contains a small packageset of the base distribution)
- AppStream ("where the modules go")
- Devel ("-devel packages and other tools")
These descriptions are very much an oversimplification, but it's an ok model to work with.
We plan to compose all of those repositories, and deliver updates in the same stream. The x86_64 tree for the BaseOS repository will look something like this:
x86_64
├── debug   # Note: we will likely snip this out and move debugs to debuginfo.centos.org
│   └── tree
│       ├── Packages
│       └── repodata
├── iso
└── os
    ├── EFI
    │   └── BOOT
    │       └── fonts
    ├── images
    │   └── pxeboot
    ├── isolinux
    ├── Packages
    └── repodata
I think this would be fine provided there is a frozen-in-time set of repodata that matches the current 'base' repo.
Going further, I'd propose something akin to https://snapshot.debian.org:
x86_64
├── iso
├── appstream
│   ├── Packages
│   └── repodata                # replaced daily
├── baseos
│   ├── Packages
│   └── repodata                # replaced daily
├── devel
│   ├── Packages
│   └── repodata                # replaced daily
├── snapshot
│   └── 20190619T145736Z
│       ├── appstream
│       │   ├── Packages -> ../../../appstream/Packages
│       │   └── repodata        # frozen in time
│       ├── baseos
│       │   ├── Packages -> ../../../baseos/Packages
│       │   └── repodata        # frozen in time
│       └── devel
│           ├── Packages -> ../../../devel/Packages
│           └── repodata        # frozen in time
└── kickstart
    ├── EFI
    │   └── BOOT
    │       └── fonts
    ├── images
    │   └── pxeboot
    ├── isolinux
    ├── appstream
    │   ├── repodata            # frozen in time
    │   └── Packages -> ../../appstream/Packages
    └── baseos
        ├── repodata            # frozen in time
        └── Packages -> ../../baseos/Packages
Something similar could be done with hardlinks of individual RPMs instead of symlinks to a Packages directory that has all RPMs, but the idea is that on-disk there is only one copy of each RPM. The additional space taken by the snapshot/ directory would be solely the daily repo metadata. These could be moved to archive.centos.org with each new point release, same as happens today w/ the rest of the tree.
(Also useful might be a latest-only tree that could be a smaller item to rsync and grab only the latest version of each package, but that would have to be done with hardlinks rather than symlinks.)
Would something like above snapshot idea have any legs?
V/r, James Cassell
What's the use-case for the snapshot structure like this from a consumer's perspective?
We could (and probably should) post the firehose from our tooling for the release composes somewhere that is not the mirrors (we don't want to mess with mirroring logs). To give you sort of an idea of what that output looks like, take a peek at how Fedora does theirs: https://kojipkgs.fedoraproject.org/compose/30/
Our output would have different repositories, and a lot fewer artifacts but the structure is similar.
Hi Folks,
While we cycle through some of the remaining builds I'd like to start a discussion about what the CentOS 8 repo structure might look like. We need to think about what the repos look like on-disk, and how this might impact the mirrors.
I'll +1 KB on the install tree.
I have no strong feeling on how it is done on the technical side, but we need to be sure that we can install from a "minor release" and get the same output until the end of the CentOS 8 lifecycle. Having a moving target for the initial set of installation packages would be impossible to test at scale.
Not supporting real minor releases, as is already the case today, is understood.
I like James' snapshot idea, as this is something we manage internally and it would save us work. And it will be easy to name a snapshot after a minor release.
By the way, I am only +0 on this at the moment. I can see why it looks attractive and makes certain work easier, but I am also aware it will make deployment different/harder. But hey, ~5 years ago we had a similar conversation on CentOS 7 minor version numbers with me arguing against going with 7.<YYYYMM> and wanting to keep 7.<minor>.
Is it still "required" ? :) Can we have 8.X again ? Can we have a 8.X symlink ? It's not a requirement but keeping thing simple is always good.
On 19/06/2019 23:36, Brian Stinson wrote:
Would something like above snapshot idea have any legs?
V/r, James Cassell
What's the use-case for the snapshot structure like this from a consumer's perspective?
I think doing static trees once or twice a year to mark the boundary state is good enough from the provider (CentOS Linux) perspective, since there is the entire media and delivery stack that lines up with it.
We do it internally inside the CentOS infra, for local consumption, and I think if someone is going to need this level of granularity then, as was mentioned earlier in the thread, a local Spacewalk install (or some such) would be the way to go.
regards
On 19.06.2019 at 18:08, Brian Stinson wrote:
Currently there are 3 primary architectures: x86_64, ppc64le, and aarch64, and 1 alternative architecture: armhfp. For CentOS 7 we split our primary and alternate architectures into /centos and /altarch on the mirrors to allow mirror admins to choose which trees to mirror. Is this something we'd like to continue?
Having all 3 primary architectures in one place will make life easier for us aarch64 developers.
Sure, some projects handle the 'altarch|centos' split for CentOS 7, but it is nothing hard to change with the migration to CentOS 8. And it is much easier for all those which do not yet handle it.
/altarch/8/ for armhfp is perfect. Info about aarch64/ppc64le being present in /centos/8/ would be lovely.