[Ci-users] Dirty nodes

Thu Jan 23 08:13:56 UTC 2020
Vipul Siddharth <vipul at redhat.com>

On Thu, Jan 23, 2020 at 12:23 PM Katerina Foniok <kkanova at redhat.com> wrote:
>
> Hello,
> we are still facing this problem on devtools-ci-slave04. Our job is failing with conflicts while installing our project's dependencies.
Yes, since I have been travelling for DevConf, I couldn't keep watch
on marking the affected nodes as admin-down.

@Katerina Foniok, when you get time, could you please send me the IDs
of the jobs failing because of this issue?
That would help me track down the affected nodes (and also confirm
whether the issue extends to other chassis as well).
>
> Thank you for taking care of that.
> Have a nice day!
>
> Katka
>
> On Mon, Jan 20, 2020 at 10:28 PM František Šumšal <frantisek at sumsal.cz> wrote:
>>
>>
>> On 1/20/20 10:12 PM, Vipul Siddharth wrote:
>> > So the issue is another one of the reinstallation problems where a
>> > node rolls back to its previous version.
>> > In +Katerina Foniok's case, the node contained old data,
>> > and in +František Šumšal's case, it was not actually a C8 node but a
>> > C7 node marked wrongly.
>> > The solution is for Duffy to verify the node a second time (after
>> > installing the OS). The easiest way we can think of to implement
>> > this is to check the number of SSH keys in authorized_keys: if a
>> > node in the ready state contains more keys than expected, it still
>> > has old data, so instead of marking it ready, Duffy should put it in
>> > some other state that I can take a look at later on.
>> > That would be the quick fix, but until then I will drain the pufty
>> > chassis (this is the one with the problems)
>> > and reset it (this has fixed the problem in the past).
>> >
>>
>> Thank you very much for your analysis and prompt solution!
>>
>> --
>> PGP Key ID: 0xFB738CE27B634E4B
>>
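
For reference, here is a minimal sketch of the duplicate-key check
proposed in the quoted thread. It is only an illustration, not actual
Duffy code: the "quarantine" state name, the expected key count of one,
and root-over-SSH access to freshly installed nodes are all assumptions
on my part.

#!/usr/bin/env python3
"""Sketch of the post-install sanity check discussed above: a freshly
reinstalled node should carry exactly one authorized SSH key (the one
just injected). Extra keys suggest the reinstall rolled back and the
node still holds old data."""

import subprocess
import sys

# Assumptions, not confirmed in the thread: a clean install has exactly
# one key, and nodes are reachable as root over SSH.
EXPECTED_KEY_COUNT = 1
AUTHORIZED_KEYS = "/root/.ssh/authorized_keys"


def count_authorized_keys(host: str) -> int:
    """Count non-empty, non-comment lines in authorized_keys on host."""
    out = subprocess.run(
        ["ssh", f"root@{host}", f"cat {AUTHORIZED_KEYS}"],
        capture_output=True, text=True, check=True,
    ).stdout
    return sum(
        1 for line in out.splitlines()
        if line.strip() and not line.lstrip().startswith("#")
    )


def post_install_state(host: str) -> str:
    """Decide which state the node should be moved to after reinstall."""
    try:
        keys = count_authorized_keys(host)
    except subprocess.CalledProcessError:
        return "failed"  # could not read the file at all; needs a look
    if keys > EXPECTED_KEY_COUNT:
        # Leftover keys mean old data survived the reinstall, so park
        # the node for inspection instead of handing it out as ready.
        return "quarantine"
    return "ready"


if __name__ == "__main__":
    print(post_install_state(sys.argv[1]))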

--
Vipul Siddharth
He/His/Him
Fedora | CentOS CI Infrastructure Team
Red Hat
w: vipul.dev