Just a quick reminder that CentOS Linux 8 is going EOL in December (see
https://www.centos.org/centos-linux-eol/).
Based on that plan, we'll also remove it from the available CI infra at
the same time it disappears from the mirror network, so requests for
CentOS 8 will return no node to run your tests on.
Start switching your tests to run on 8-stream instead, which has been
available in the CI infra from the start.
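For example, if your jobs request bare-metal nodes through duffy with
the cico client, pointing them at 8-stream could look roughly like this
(a sketch only; the exact flags depend on your python-cicoclient
version, so check `cico help node get` on your setup):

  # request one 8-stream x86_64 node instead of a CentOS Linux 8 one
  cico node get --arch x86_64 --release 8-stream --count 1
  # ... run your tests on the returned host(s) ...
  # then release the session returned by 'node get'
  cico node done <ssid>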
--
Fabian Arrotin
The CentOS Project | https://www.centos.org
gpg key: 17F3B7A1 | twitter: @arrfab
Due to a storage migration (see
https://pagure.io/centos-infra/issue/534) we'll have to shut down and
restart the OpenShift CI cluster (OCP).
The migration is scheduled for "Tuesday January 18th, 9:00 AM UTC".
You can convert to local time with $(date -d '2022-01-18 09:00 UTC')
The expected downtime is estimated at ~60 minutes, the time needed to:
- shut down the OpenShift workers and control plane nodes
- run a last data sync between the old and new NFS storage nodes
- restart the OpenShift control plane and worker nodes
Thanks for your understanding and patience.
on behalf of the Infra team,
--
Fabian Arrotin
The CentOS Project | https://www.centos.org
gpg key: 17F3B7A1 | twitter: @arrfab
Hi all CI infra tenants/users,
As a follow-up to the previous mail about the cico-workspace container
update (see
https://lists.centos.org/pipermail/ci-users/2021-October/002184.html),
we had a look at what was needed to rebase the cico-workspace container
to CentOS 8-stream.
We've built a container that you can start using, with the following specs:
- based on CentOS 8-stream
- so shipping python3 (instead of python2), which is important for your jobs
- still providing python-cicoclient (python3 version)
- ansible 2.9.27
- git
This container is currently available under the :staging tag, so it's
not *yet* the one that is automatically pulled by your Jenkins as the
"cloud/container" node to run your jobs.
We'd like you to start testing your jobs against that updated
cico-workspace at your earliest convenience, so that you have a chance
to adapt your jobs if needed, or ask us to add/change something in that
container.
See the items below.
# How to test?
The current DeploymentConfig used in OpenShift provides Jenkins, itself
configured to pull cico-workspace from quay.io
(quay.io/centosci/cico-workspace:latest).
But, as you have full admin rights in your own Jenkins namespace, you
can temporarily switch to the :staging tag.
To proceed, just log into your own Jenkins app on openshift/ocp.ci and
follow these steps:
- Manage Jenkins
- Manage Nodes and Clouds (under System Configuration)
- Configure Clouds (in the left menu)
- Expand Kubernetes and, under Pod template, you'll see "cico-workspace"
- Pod Template details
- Containers
- Just swap quay.io/centosci/cico-workspace:latest =>
quay.io/centosci/cico-workspace:staging
- Apply/Save
You can now run a test job and see if it still works or if you need to
adapt something.
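If you want a quick local sanity check before touching Jenkins, you can
also pull the staging image directly and inspect what it ships (a small
sketch, assuming you have podman or docker available locally):

  # pull the staging image announced above
  podman pull quay.io/centosci/cico-workspace:staging
  # verify the tooling shipped in the container
  podman run --rm quay.io/centosci/cico-workspace:staging python3 --version
  podman run --rm quay.io/centosci/cico-workspace:staging ansible --version
  podman run --rm quay.io/centosci/cico-workspace:staging git --version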
# How to revert?
Assuming that you want to revert to the :latest tag (still C7-based),
just proceed as above and swap back to quay.io/centosci/cico-workspace:latest
# When will you enforce the change?
We have no fixed date in mind, but we'd like to enforce this in early
December, which should give you enough time to fix your jobs (if there
is a need to). We can also provide an alternative through a different
tag such as :c7, while :latest would point to the 8-stream (c8s) image;
you'd then be able to modify your jobs to keep using the older C7
container (not advised, though we'll continue to rebuild that container
automatically for security updates).
Should you have questions/remarks, feel free to chime in!
--
Fabian Arrotin
The CentOS Project | https://www.centos.org
gpg key: 17F3B7A1 | twitter: @arrfab
For people not following the initial infra ticket about this
(https://pagure.io/centos-infra/issue/441), here is a quick status about
CentOS Stream 9 availability in the CI infra:
Starting from today, each tenant in the CI infra can request Stream 9
for the x86_64 architecture.
Due to a lack of compatible hardware for ppc64le, it's not available
yet, but we hope to be able to provide it before the end of this year.
aarch64 is on the list of architectures to be tested and added (more on
that when it's ready).
We already updated python-cicoclient to the correct version on the
existing Virtual Machines setup behind ci.centos.org, but we still have
(next on the list) to update the cico-workspace container/image used in
the openshift/ocp4 cluster to include the newer python-cicoclient (see
http://mirror.centos.org/centos/7/infra/x86_64/infra-common/Packages/p/pyth…).
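If your jobs run on the Virtual Machines behind ci.centos.org, you can
confirm that the updated client is the one in use before requesting
Stream 9 nodes (a quick sketch; the exact version string may differ):

  # on the VM/workspace you use, confirm the updated client is installed
  rpm -q python-cicoclient
  # the '9-stream' release string can then be passed to the cico client
  # when requesting nodes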
Enjoy your tests on Stream 9!
PS: on the infra side, we still have to set up an internal mirror
returned by mirrormanager; Stream 9 currently uses external mirrors, so
dnf operations will be "slow" compared to what you see for
7/8/8-stream, but it's on the list.
--
Fabian Arrotin
The CentOS Project | https://www.centos.org
gpg key: 17F3B7A1 | twitter: @arrfab
Follow-up on the previous mail about supporting 9-stream in the CI infra.
We have refreshed the cico-workspace container that is deployed as the
Jenkins agent container; it's available at
https://quay.io/repository/centosci/cico-workspace
and is the version that is automatically deployed in the OCP cluster.
Changes:
- updated to CentOS 7.9 + updates
- updated python-cicoclient to 0.4.6 (the version that supports
requesting '9-stream' nodes from duffy)
- added ansible 2.9.27 (per request from some tenants, so they don't
have to install it themselves)
Next steps: we'd like to rebase that container to 8-stream in the near
future, but we first need python-cicoclient available as a package and
tested with python3. Once we have such a test image, we'll rebase and
keep you informed.
--
Fabian Arrotin
The CentOS Project | https://www.centos.org
gpg key: 17F3B7A1 | twitter: @arrfab
Due to some needed software updates, we'll have to shut down and
restart Jenkins on the 'legacy' ci.centos.org setup (reminder: moving to
OpenShift would be great, and we'll have to set a hard deadline for this
if some tenants are still on the old setup).
The maintenance is scheduled for "Tuesday September 28th, 9:00 AM UTC".
You can convert to local time with $(date -d '2021-09-28 09:00 UTC')
The expected downtime is estimated at ~30 minutes, the time needed to
put Jenkins in "shutdown/quiet mode", wait for some jobs to finish, then
update and restart Jenkins.
Thanks for your understanding and patience.
on behalf of the Infra team,
--
Fabian Arrotin
The CentOS Project | https://www.centos.org
gpg key: 17F3B7A1 | twitter: @arrfab
Raising awareness here. I'm not sure if anyone uses TravisCI on our
OpenShift cluster, but better safe than sorry.
---------- Forwarded message ---------
From: Jay Madison <madisonj(a)redhat.com>
Date: Wed, Sep 15, 2021 at 8:06 PM
Subject: IMPORTANT - ACTION MAY BE NEEDED - TravisCI security issues
To: <announce-list(a)redhat.com>
Hi all,
TL;DR: If your software development projects use TravisCI, please rotate
your secrets as soon as possible, but by no later than close of business,
September 17th. If you use TravisCI and have seen any first time
contributors between Sep 03 - Sep 10, 2021, follow the steps below in the “What
you need to do” section and contact infosec(a)redhat.com if you have any
questions. If you are not involved with software development activities,
using tools such as GitHub, GitLab or CI/CD tooling, this message very
likely does not apply to you, and you may ignore it.
What happened
Travis CI is a hosted continuous integration service used to build and test
software projects hosted on source code repositories such as GitHub.
On September 13, TravisCI released a security bulletin
<https://travis-ci.community/t/security-bulletin/12081>[1] advising that
secret environment variables of any public repositories may have been
leaked. This issue has been designated as CVE-2021-41077 [2].
This issue was reported to TravisCI by the community on September 7 and a
patch was deployed by TravisCI on September 10. It is believed that all use
between September 3rd and 10th may have been subject to this vulnerability,
at a minimum. Given the limited information published by the Travis CI
Team, it is impossible to rule out a broader range of potential impact.
Information Security is in the process of scanning known Red Hat
repositories, but we need your help.
What you need to do
If you have a repository that uses TravisCI:
- Rotate your secrets as soon as possible, but by no later than close of
business September 17th.
  - "Secrets" refers to the secure environment variables of all public
repos using TravisCI: items such as signing keys, access credentials,
and API keys.
- Check for any external pull requests between September 3rd - 10th.
  - This includes first-time pull request submitters and people who
don't submit often.
  - In particular, look for the tags FIRST_TIME_CONTRIBUTOR,
FIRST_TIMER, or NONE (see the example command after this list).
- To learn more about the specific environment variables that may have
been exposed, please visit:
https://source.redhat.com/departments/it/it-information-security/wiki/septe…
- For any pull requests of this nature, check the diff to see if it does
something unusual, for example dumping environment variables.
- If you are unsure of any of these steps, notice anything unusual
and/or unexpected activity, please contact infosec(a)redhat.com.
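For GitHub-hosted repositories, one way to review that window is to
list the pull requests via the GitHub REST API and check their author
association (a sketch only; OWNER/REPO are placeholders for your
repository and the jq filter is illustrative):

  # list PRs created in the affected window together with their author
  # association; review FIRST_TIME_CONTRIBUTOR / FIRST_TIMER / NONE entries
  gh api "repos/OWNER/REPO/pulls?state=all&per_page=100" \
    --jq '.[] | select(.created_at >= "2021-09-03" and .created_at < "2021-09-11")
              | [.number, .user.login, .author_association] | @tsv'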
Thank you for your diligence in helping us keep Red Hat secure. As always,
if there are any concerns, questions, or you wish to report an anomaly or
potential incident, please contact infosec(a)redhat.com directly.
Regards,
J.
Links:
[1] TravisCI security bulletin:
https://travis-ci.community/t/security-bulletin/12081
[2] https://cve.mitre.org/cgi-bin/cvename.cgi?name=CVE-2021-41077
--
Jay Madison
Vice President - Trust, Risk, Assurance & Compliance
Red Hat, Inc.
Forward any comments to mailto:memo-list@redhat.com for open discussion.
Due to a previous hardware issue
(https://lists.centos.org/pipermail/ci-users/2020-October/002124.html)
we put a workaround in place; it is still active, but it is now blocking
a reorganization of the infra used behind ci.centos.org.
We are announcing a maintenance window that will let us move network
switches and reconfigure the network properly, also allowing us to
redeploy/load-balance our workload across the remaining available
infrastructure.
The maintenance is scheduled for "Tuesday August 17th, 1:00 PM UTC".
You can convert to local time with $(date -d '2021-08-17 13:00 UTC')
The expected downtime is estimated at ~60 minutes, the time needed to
migrate services, reconfigure the network switch, and restart the nodes
connected to the new switch.
Impacted services in the CI env:
- duffy API (unavailable during the move to another hypervisor, so
normally down for ~10 min): no way to request a duffy node for
bare-metal tests
- jenkins (aka ci.centos.org) and all agents connected to it
- cloud.cico (the OpenNebula infra used to provide Virtual Machines for
CI tenants)
Thanks for your understanding and patience.
on behalf of the Infra team,
--
Fabian Arrotin
The CentOS Project | https://www.centos.org
gpg key: 17F3B7A1 | twitter: @arrfab
On Tue, Aug 17, 2021 at 7:27 PM Jonathan Beakley <jbeakley(a)redhat.com>
wrote:
> +Ant and Stefan, because Leigh is on PTO.
>
Please see Fabian's previous announcement of the scheduled outage [0]:
https://lists.centos.org/pipermail/ci-users/2021-August/002177.html
>
> -JB
>
> On Tue, Aug 17, 2021 at 9:54 AM Jonathan Beakley <jbeakley(a)redhat.com>
> wrote:
>
>> +Leigh and Aoife
>>
>> Leigh and Aoife,
>>
>> Can you help get some attention on this issue? Thank you,
>>
>> -JB
>>
>>
>> On Tue, Aug 17, 2021 at 9:35 AM Serhii Kryzhnii <skryzhni(a)redhat.com>
>> wrote:
>>
>>> Hello,
>>>
>>> https://ci.centos.org/ is returning 503 Service Unavailable
>>>
>>> Is there anyone that can look into this?
>>>
>>> Thanks,
>>>
>>> --
>>> Serhii Kryzhnii
>>> Application SRE team, Service Delivery
>>> Red Hat
>>> skryzhni(a)redhat.com
>>>
>>
>>
>> --
>>
>> Jonathan Beakley
>>
>> Senior Engineering Manager, Site Reliability Engineering Services
>>
>> Red Hat <https://www.redhat.com/>, 314 Littleton Road, Westford, MA
>> 01886
>>
>> jbeakley(a)redhat.com | Mobile: 617-529-2828
>>
>> I respect your work-life balance. There is no need to answer this email
>> outside of your office hours.
>> <https://www.redhat.com/>
>>
>
>
> --
>
> Jonathan Beakley
>
> Senior Engineering Manager, Site Reliability Engineering Services
>
> Red Hat <https://www.redhat.com/>, 314 Littleton Road, Westford, MA 01886
>
> jbeakley(a)redhat.com | Mobile: 617-529-2828
>
> I respect your work-life balance. There is no need to answer this email
> outside of your office hours.
> <https://www.redhat.com/>
>
--
Vipul Siddharth
He/His/Him
Infrastructure @ Fedora and CentOS
<http://vipul.dev>
Hi folks,
Something to be aware of: https://pagure.io/centos-infra/issue/371
"'
There will be a planned outage for the CentOS CI OCP4 cluster on Monday
28th June for a period of 1 hour from 09:00 UTC
until 10:00 UTC but should not take the full hour to complete this work.
This work is to allow us to restore our primary storage `storage02` back
into production. This should drastically improve performance as we have
moved to a RAID10 configuration.
"'
Thank you, if you have any questions, please comment them on the ticket
--
David Kirwan
Software Engineer
Community Platform Engineering @ Red Hat
T: +(353) 86-8624108 IM: @dkirwan