Dear community,
I would like to bring up this topic once again because it is relevant
to the well-being of the CentOS visual identity and its long-term
improvement. This mail is probably for the Red Hat Liaison, considering
the legalities involved in CentOS branding matters. Nevertheless, I
would like to keep the discussion open, to collect as many opinions
about it as possible.
Considering the CentOS brand is presently a registered trademark of Red
Hat, the exact questions are:
1. Regarding CentOS brand changes and design improvements: what does
Red Hat allow the CentOS community to do, and not to do? Please
consider both legal and non-legal matters.
2. Would it be possible for Red Hat to explicitly set the license under
which the CentOS brand (creative/design) work is released, so as to
guarantee its openness inside the CentOS community? If not, please
elaborate on why, and share the expected process to follow in order to
keep the brand design relevant over time.
I have deliberately collected some thoughts[1] about the recent CentOS
brand refresh process, but I am not sure whether they are aligned with
Red Hat's needs and expectations. The goal here would be to make a very
clean and simple statement about how much autonomy the CentOS community
has over its own brand, and to complement the CentOS Trademark
Guidelines[2] document with that information, since there is no mention
of it at the moment.
[1] https://gitlab.com/areguera/centos-brand
[2] https://www.centos.org/legal/trademarks/
Appreciate your comments.
Best regards.
--
Alain Reguera Delgado <alain.reguera(a)gmail.com>
Some time ago, we started to suffer from spammers/load/etc. against
wiki.centos.org. We tried to implement various techniques, found on the
moin wiki or elsewhere, but we have to face it: moin (http://moinmo.in/),
the underlying app for wiki.centos.org, is now unmaintained. The latest
version (the one we run) is Python 2.7 compatible, but there is no plan
for Python 3.
For that reason, some SIGs (including the Infra SIG) have already moved
their docs to markdown format, which is easy to write/review through PRs
and is rendered automatically.
So the question is: do we want/need to keep wiki.centos.org running?
Most of the content (if not almost 99%) is outdated/unmaintained at this
stage, so deciding what to do with that content, and how/where to
migrate it, would make sense.
That's tied to an old infra ticket opened a long time ago
(https://pagure.io/centos-infra/issue/793), when we had to enable
mod_qos and other workarounds just to try to keep it running and
functional.
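For reference, the kind of throttling involved looks roughly like the
sketch below (illustrative directives and values only, not the actual
wiki.centos.org configuration):

    # Cap load so abusive clients cannot take moin down; the values
    # here are invented for illustration.
    cat >> /etc/httpd/conf.d/qos.conf <<'EOF'
    LoadModule qos_module modules/mod_qos.so
    # at most 50 concurrent requests for the whole wiki
    QS_LocRequestLimit / 50
    # at most 10 concurrent TCP connections per client IP
    QS_SrvMaxConnPerIP 10
    EOF
    systemctl restart httpd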
Let's start a thread/discussion!
@Shaun: as Docs leader, your voice/opinion/feedback would be greatly
appreciated ;-)
--
Fabian Arrotin
The CentOS Project | https://www.centos.org
gpg key: 17F3B7A1 | twitter: @arrfab
Hi,
We are currently suffering from flapping network connectivity to the
main DC where the majority of the CentOS Infra is hosted.
After some internal discussion, we confirmed that the upstream link
provider is aware of the issue and is looking for a fix (but no ETA).
Impacted services:
- centos ci
- git.centos.org
- cbs.centos.org
- mirror.centos.org (downstream consumer and having issues pulling content)
- mirror.stream.centos.org (downstream consumer and having issues
pulling content)
- buildlogs.centos.org (same reason)
We'll post an update when this is finally resolved.
--
Fabian Arrotin
The CentOS Project | https://www.centos.org
gpg key: 17F3B7A1 | twitter: @arrfab
We have recently been made aware that the internal ABI changes due to
LD_AUDIT fixes cause processes launched concurrently during the update
to crash:
glibc: Upgrading to glibc-2.28-209.el8.x86_64 causes segfaults during
concurrent process launch
<https://bugzilla.redhat.com/show_bug.cgi?id=2119304>
In glibc-2.28-211.el8, we have made the ABI changes more compatible with
*older* versions (before -208), but this might cause yet another bumpy
update from -209 or -210 to -211 or later. (The version ranges are
slightly different for s390x; see the bug. ppc64le has a different
issue caused by RPM file writing order, but no crashes.)
CentOS 9 Stream should be less sensitive to these issues due to the
merged libpthread/libdl.
We are putting some checks into place to catch this earlier. In
general, it might not be possible to avoid such issues completely,
particularly on CentOS 8 Stream. The workaround is to do updates in
quasi-single-user mode, with as little running on the system as
possible.
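A minimal sketch of that quasi-single-user approach (plain systemd/dnf
commands; note that rescue.target stops network logins, so run this from
a console):

    # drop to rescue mode so almost nothing is launching processes
    # while glibc is being swapped out
    systemctl isolate rescue.target

    # apply the updates with the system quiesced
    dnf -y update

    # return to normal operation
    systemctl isolate multi-user.target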
Thanks,
Florian
As part of OpenStack deployments we deploy RabbitMQ. During the current
cycle I looked at moving from CentOS Stream 8 to 9.
And RabbitMQ is a problem.
When I boot a CentOS Stream 9 system and then use a freshly built
'rabbitmq' container, memory use of the
"/usr/lib64/erlang/erts-12.3.2.2/bin/beam.smp" process goes up to
1.6 GB of RAM:
(rabbitmq)[root@kolla-cs9 /]# rabbitmq-diagnostics memory_breakdown
Reporting memory breakdown on node rabbit@kolla-cs9...
other_system: 1.6233 gb (68.59%)
allocated_unused: 0.5164 gb (21.82%)
binary: 0.1551 gb (6.55%)
code: 0.0356 gb (1.51%)
other_proc: 0.0184 gb (0.78%)
connection_other: 0.0037 gb (0.16%)
other_ets: 0.0036 gb (0.15%)
queue_procs: 0.0021 gb (0.09%)
plugins: 0.0021 gb (0.09%)
connection_readers: 0.0021 gb (0.09%)
atom: 0.0014 gb (0.06%)
mgmt_db: 0.001 gb (0.04%)
metrics: 8.0e-4 gb (0.03%)
connection_writers: 4.0e-4 gb (0.02%)
connection_channels: 4.0e-4 gb (0.02%)
mnesia: 3.0e-4 gb (0.01%)
msg_index: 0.0 gb (0.0%)
quorum_ets: 0.0 gb (0.0%)
stream_queue_procs: 0.0 gb (0.0%)
stream_queue_replica_reader_procs: 0.0 gb (0.0%)
queue_slave_procs: 0.0 gb (0.0%)
quorum_queue_procs: 0.0 gb (0.0%)
stream_queue_coordinator_procs: 0.0 gb (0.0%)
reserved_unallocated: 0.0 gb (0.0%)
If I boot the same container on a Debian host, then the same process
uses 0.2 GB of RAM:
(rabbitmq)[root@debian /]# rabbitmq-diagnostics memory_breakdown
Reporting memory breakdown on node rabbit@debian...
binary: 0.2787 gb (70.2%)
code: 0.0355 gb (8.93%)
other_system: 0.0255 gb (6.44%)
other_proc: 0.0209 gb (5.26%)
connection_other: 0.0108 gb (2.72%)
mgmt_db: 0.0073 gb (1.84%)
plugins: 0.0048 gb (1.2%)
other_ets: 0.0037 gb (0.93%)
connection_readers: 0.003 gb (0.75%)
queue_procs: 0.0028 gb (0.7%)
atom: 0.0015 gb (0.37%)
metrics: 0.0011 gb (0.28%)
connection_channels: 8.0e-4 gb (0.21%)
mnesia: 4.0e-4 gb (0.1%)
connection_writers: 2.0e-4 gb (0.05%)
msg_index: 0.0 gb (0.01%)
quorum_ets: 0.0 gb (0.01%)
stream_queue_procs: 0.0 gb (0.0%)
stream_queue_replica_reader_procs: 0.0 gb (0.0%)
queue_slave_procs: 0.0 gb (0.0%)
quorum_queue_procs: 0.0 gb (0.0%)
stream_queue_coordinator_procs: 0.0 gb (0.0%)
allocated_unused: 0.0 gb (0.0%)
reserved_unallocated: 0.0 gb (0.0%)
So I started checking. Maybe it is a problem with the Erlang/RabbitMQ
packages provided by the RabbitMQ team?
I booted a CS9 system and deployed OpenStack using Debian-based
containers. Again 1.6 GB memory use.
So I built CS9-based containers using Erlang/RabbitMQ from the CentOS
Stream 9 "messaging/rabbitmq-38" repository. Again 1.6 GB memory use.
I am wondering what the reason is, and what the proper solution would be.
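One environment difference worth ruling out (my assumption, not a
confirmed cause): the Erlang runtime sizes some of its internal
allocations from per-process resource limits, and container defaults
for those can differ between host distributions. A quick check:

    # compare the open-files limit inside the container on both hosts;
    # a huge value here is one plausible suspect for the bloated
    # 'other_system' allocation
    podman exec rabbitmq sh -c 'ulimit -n'

    # hypothetical mitigation: cap the limit when starting the
    # container (image name is a placeholder)
    podman run --ulimit nofile=1024:1024 rabbitmq-image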
For now the only solution I know of is not acceptable: deploy OpenStack
using Debian or Ubuntu and forget about CentOS Stream 9 and the rest of
the RHEL 9 family.
======================================================
#centos-meeting: CentOS Cloud SIG meeting (2022-08-11)
======================================================
Meeting started by amoralej at 15:01:25 UTC. The full logs are available
at
https://www.centos.org/minutes/2022/August/centos-meeting.2022-08-11-15.0…
Meeting summary
---------------
* OKD/SCOS Update (amoralej, 15:04:38)
* OKD (OpenShift community edition) team is working on a new variant
that runs on CentOS Stream 9 CoreOS (amoralej, 15:07:25)
* OKD team is looking for the best way to provide scos images so that
users can use them to run OpenShift community edition or other
things (amoralej, 15:17:35)
* LINK: https://wiki.centos.org/Events/Dojo/DevConfUS2022
(mcraychee, 15:24:22)
* LINK: http://mirror.stream.centos.org/9-stream/NFV/ (aleskandro,
15:38:19)
* LINK: http://mirror.stream.centos.org/9-stream/NFV/ (amoralej,
15:42:09)
* AGREED: SCOS team will join CloudSIG, welcome! (amoralej, 15:45:02)
* ACTION: someone from the CoreOS team should be designated as
CloudSIG co-chair (amoralej, 15:45:53)
* LINK:
https://pbs.twimg.com/profile_images/1455901978194690048/f1_Z6llN_400x400.j…
(lorbus, 15:55:45)
Meeting ended at 16:00:40 UTC.
Action Items
------------
* someone from the CoreOS team should be designated as CloudSIG co-chair
Action Items, by person
-----------------------
* **UNASSIGNED**
* someone from the CoreOS team should be designated as CloudSIG
co-chair
People Present (lines said)
---------------------------
* amoralej (68)
* lorbus (56)
* travier (23)
* spotz84 (22)
* aleskandro (9)
* mcraychee (9)
* centguard (6)
* davdunc[m] (2)
Generated by `MeetBot`_ 0.4
.. _`MeetBot`: http://wiki.debian.org/MeetBot
Hi everyone,
This is a weekly report from the CPE (Community Platform Engineering)
Team. If you have any questions or feedback, please respond to this
report or contact us on #redhat-cpe channel on libera.chat
(https://libera.chat/)
Week: 15th August - 19th August 2022
If you wish to read this in the form of a blog post, check the post on
the Fedora Community Blog:
https://communityblog.fedoraproject.org/cpe-weekly-update---week-33-2022/
# Highlights of the week
## Infrastructure & Release Engineering
Goal of this Initiative
-----------------------
The purpose of this team is to take care of the day-to-day business
regarding CentOS and Fedora infrastructure and Fedora release
engineering work. It's responsible for services running in the Fedora
and CentOS infrastructure and for preparing things for the new Fedora
release (mirrors, mass branching, new namespaces, etc.).
The ARC (which is a subset of the team) investigates possible
initiatives that CPE might take on.
Link to planning board: https://zlopez.fedorapeople.org/I&R-2022-08-17.pdf
Link to docs: https://docs.fedoraproject.org/en-US/infra/
Update
------
### Fedora Infra
* Mass update/reboot cycle this week (stg/nonoutage done, outage later
today)
* Freeze for f37 beta starts next week
### CentOS Infra including CentOS CI
* Discussion around the CentOS Stream infra handover
* New tasks for the CI infra migration
* S3 bucket for the Stream CoreOS effort
* Some infra projects moved from gitea to gitlab
### Release Engineering
* OpenH264 composes for f36, f37, and f38 sent to Cisco
* Package retirement issues after the branching, due to human error
## CentOS Stream
Goal of this Initiative
-----------------------
This initiative is working on CentOS Stream/Emerging RHEL to make this
new distribution a reality. The goal of this initiative is to prepare
the ecosystem for the new CentOS Stream.
Updates
-------
* Face-to-face meeting in Boston.
* Penultimate parts of the module process sync between el8 and el9.
## EPEL
Goal of this initiative
-----------------------
Extra Packages for Enterprise Linux (or EPEL) is a Fedora Special
Interest Group that creates, maintains, and manages a high quality set
of additional packages for Enterprise Linux, including, but not limited
to, Red Hat Enterprise Linux (RHEL), CentOS, Scientific Linux (SL), and
Oracle Linux (OL).
EPEL packages are usually based on their Fedora counterparts and will
never conflict with or replace packages in the base Enterprise Linux
distributions. EPEL uses much of the same infrastructure as Fedora,
including buildsystem, bugzilla instance, updates manager, mirror
manager and more.
Updates
-------
* EPEL9 is up to 7339 (+106) packages from 3278 (+71) source packages
* Found that the bloaty package was uninstallable because of a libre2
soname fix; rebuilding it fixed the issue (see the sketch below).
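A minimal sketch of how that kind of soname breakage can be spotted
(generic dnf commands, not necessarily the exact ones used here):

    # list packages whose dependencies can no longer be satisfied,
    # e.g. after a library bumped its soname
    dnf repoquery --unsatisfied

    # inspect which sonames a package's dependencies name
    dnf repoquery --requires bloaty | grep -i re2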
## FMN replacement
Goal of this initiative
-----------------------
FMN (Fedora-Messaging-Notification) is a web application allowing users
to create filters on messages sent to (currently) fedmsg and forward
these as notifications on to email or IRC.
The goal of the initiative is mainly to add fedora-messaging schemas,
create a new UI for a better user experience and create a new service to
triage incoming messages to reduce the current message delivery lag
problem. The community will benefit from speedier notifications based
on their own preferences (IRC, Matrix, email), from unifying the Fedora
project onto one messaging service, and from human-readable results in
Datagrepper.
Also, CPE tech debt will be significantly reduced by dropping the
maintenance of fedmsg altogether.
Updates
-------
* Frontend
* Use CoreUI components
* Set up i18n
* Initial version of a “New Rule” page
* Authentication integration FE/BE (ongoing)
* Backend: SQLAlchemy integration (ongoing)
Kindest regards,
CPE Team
Hello everyone,
This is a friendly reminder of the current and upcoming status of CentOS CI
changes (check [1]).
Projects that opted in to continue on CentOS CI have been migrated, and
the new Duffy API is available. With that, *phase 0 has been completed*.
Regarding *phase 1*, we are still working on a permanent fix for the DB
Concurrency issues [2]. Also, as for our new OpenShift deployment, we
have a staging environment up and running, and it should be available
at the beginning of September 2022.
In October 2022 we begin *phase 2*, when we will work through the
following items (these were also previously communicated in [1]):
- the legacy/compatibility API endpoint will hand over EC2 instances
instead of local seamicro nodes (VMs vs. bare metal)
- bare-metal options will be available through the new API only
- legacy seamicro and aarch64/ThunderX hardware will be decommissioned
- the only remaining "on-premises" option will be ppc64le (local cloud)
Feel free to reach out if you have any questions or concerns.
The final deadline for decommissioning the old infrastructure (*phase 3*)
is *December 2022*. We will be communicating further until then;
meanwhile, reach out to any of us in case you have any questions.
Regards,
[1] [ci-users] Changes on CentOS CI and next steps:
https://lists.centos.org/pipermail/ci-users/2022-June/004547.html
[2] DB Concurrency issues: https://github.com/CentOS/duffy/issues/523
--
Camila Granella
Associate Manager, Software Engineering
Red Hat <https://www.redhat.com/>