On 19/06/2019 19:22, Brian Stinson wrote:
>
> On Wed, Jun 19, 2019, at 13:07, Karanbir Singh wrote:
>> On 19/06/2019 17:45, Brian Stinson wrote:
>> >
>> > On Wed, Jun 19, 2019, at 11:32, Karanbir Singh wrote:
>> >> On 19/06/2019 17:18, Fabian Arrotin wrote:
>> >> >>
>> >> >> We plan to compose all of those repositories, and deliver updates
>> >> >> in the same stream.
>> >> >
>> >> > Just so that people realize: no *updates* repo anymore, so it's all
>> >> > combined: if you install from the network $today, what you'll install
>> >> > $tomorrow will have everything rolled in directly
>> >> >
>> >>
>> >> that's not going to work - we need to retain the ability to deliver
>> >> reproducible installs.
>> >
>> > Can you clarify this? What "reproducible install" pattern is broken here?
>> >
>> >>
>>
>> I need to be able to run installs against a mirror, weeks and months
>> apart, and arrive at exactly the same installed payload.
>>
>> regards
>
> Is there something preventing you from doing that if we ship updates in
> the same repo as the 0-day release content?
>
yes,
if I yum install httpd against the base repo only, I'd like to get that
same httpd, not a version three updates removed that has since landed in
the same repo.
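To illustrate the pattern (repo ids and the version string below are made
up, just a sketch): today you can disable the updates repo to reproduce the
0-day payload, but once everything lands in one repo the resolver always
picks the newest build unless you spell out an exact version yourself:

    # split repos: this reproduces the GA-time install
    yum --disablerepo=updates install httpd

    # merged repo: the same plain install pulls whatever has landed since,
    # unless you pin a specific (hypothetical) build by hand
    yum install httpd-2.4.6-88.el7.centos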
Again, this may just be a case of publishing a second set of metadata
rather than retaining the base RPM set separately, but we need to retain
this functionality.
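Purely as a sketch of what I mean (the paths and the pkglist file are
illustrative, not an agreed layout): one createrepo_c run over the full
rolling package set, and a second run restricted to a compose-time package
list, published as an alternate metadata tree that installs can point at:

    # rolling metadata over everything currently in the repo
    createrepo_c /srv/repo/BaseOS/x86_64/os/

    # second metadata tree limited to the compose-time NEVRAs, driven by a
    # package list saved when the tree was composed; --baseurl points
    # clients back at the shared package directory
    createrepo_c --pkglist ga-pkglist.txt \
        --baseurl http://mirror.example.org/BaseOS/x86_64/os/ \
        --outputdir /srv/repo/BaseOS/x86_64/os-ga/ \
        /srv/repo/BaseOS/x86_64/os/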
Wouldn't pinning versions be better here, if that's what you need? If you
took that same kickstart over to a RHEL machine, you'd get the updates over
there.
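If pinning is the route, a minimal sketch of it (the package name and
version below are only examples) would be the versionlock plugin, which
holds dnf/yum at a specific build even after newer ones land in the repo:

    # RHEL/CentOS 8 naming; on 7 the plugin is yum-plugin-versionlock
    dnf install python3-dnf-plugin-versionlock
    dnf versionlock add httpd-2.4.37-11.el8
    dnf update    # httpd stays at the locked version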
Seems to me like delivering the updates separately goes against our
community recommendations anyway (i.e. the first thing we say on IRC is
"did you run yum/dnf update?").