We are pleased to announce a preliminary schedule for the online CentOS
Dojo prior to FOSDEM. Details are available here:
https://wiki.centos.org/Events/Dojo/FOSDEM2021
Registration will be free; the registration link is on the page above.
Hi,
I'm working at a university, and we have a number of servers running CentOS Linux. We chose CentOS Linux because it is free of charge. Recently we learned that the CentOS project is shifting focus from CentOS Linux to CentOS Stream. We are very concerned about whether CentOS Stream requires a subscription, and whether updates/patches for CentOS Stream require a subscription.
Could you help with my queries below?
Question 1
Referring to 'FAQ - CentOS Project shifts focus to CentOS Stream' (https://centos.org/distro-faq/):
Q2: What about the other releases of CentOS Linux?
A:
* Updates for the CentOS Linux 6 distribution ended November 30, 2020.
* Updates for the CentOS Linux 7 distribution continue as before until the end of support for RHEL7.
* Updates for the CentOS Linux 8 distribution continue until the end of 2021; users can choose to switch over directly to CentOS Stream 8.
* Updates for the CentOS Stream 8 distribution continue through the full RHEL support phase.
We will not be producing a CentOS Linux 9, as a rebuild of RHEL 9. Instead CentOS Stream 9 fulfills this role. (See Q6 below regarding the overlap between concurrent streams.)
I don't quite understand the 4th point, 'Updates for the CentOS Stream 8 distribution continue through the full RHEL support phase'. Could you elaborate on it? Does it mean that users will receive updates/patches for CentOS Stream 8 ONLY if they subscribe to full RHEL support?
Question 2
In the announcement, it states: 'If you are using CentOS Linux 8 in a production environment, and are concerned that CentOS Stream will not meet your needs, we encourage you to contact Red Hat about options.' Could you highlight the drawbacks of CentOS Stream that make it discouraged for production environments?
Thanks very much in advance.
Regards,
Catherine Chan
Hi Srijan,
I will redeploy the scenario and I will check if the steps include that package.
Shouldn't glusterfs-selinux be a dependency?
Best Regards,
Strahil Nikolov
On Wednesday, January 6, 2021 at 07:29:27 GMT+2, Srijan Sivakumar <ssivakum(a)redhat.com> wrote:
Hi Strahil,
SELinux policies and rules have to be added for gluster processes to work as intended when SELinux is in enforcing mode. Could you confirm whether you've installed the glusterfs-selinux package on the nodes?
If not then you can check out the repo at https://github.com/gluster/glusterfs-selinux.
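A quick way to verify and remedy this might look like the following sketch (assumes CentOS 8 with the Storage SIG repos configured, which is where the package would normally come from):

```shell
# Check whether the gluster SELinux policy package is installed;
# if not, install it from the configured repositories.
rpm -q glusterfs-selinux || dnf install -y glusterfs-selinux

# Confirm that gluster-specific file contexts are now known to the policy.
semanage fcontext -l | grep glusterd
```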
Regards,
Srijan
On Wed, Jan 6, 2021 at 2:15 AM Strahil Nikolov <hunter86_bg(a)yahoo.com> wrote:
> Did anyone receive that e-mail ?
> Any hints ?
>
> Best Regards,
> Strahil Nikolov
>
At 19:05 +0000 on 30.12.2020 (Wed), Strahil Nikolov wrote:
>> Hello All,
>>
>> I have been testing Geo-Replication on Gluster v8.3 on top of CentOS
>> 8.3.
>> It seems that everything works until SELinux is added to the
>> equation.
>>
>> So far I have identified several issues on the Master Volume's nodes:
>> - /usr/lib/ld-linux-x86-64.so.2 has a different SELinux context than
>> the target that it is pointing to. For details check
>> https://bugzilla.redhat.com/show_bug.cgi?id=1911133
>>
>> - SELinux is preventing /usr/bin/ssh from search access to
>> /var/lib/glusterd/geo-replication/secret.pem
>>
>> - SELinux is preventing /usr/bin/ssh from search access to .ssh
>>
>> - SELinux is preventing /usr/bin/ssh from search access to
>> /tmp/gsyncd-aux-ssh-tnwpw5tx/274d5d142b02f84644d658beaf86edae.sock
>>
>> Note: Using 'semanage fcontext' doesn't help, because newly created
>> files inherit the SELinux context of the parent directory, so you
>> would need to run restorecon after every file created by the geo-
>> replication process.
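For reference, the failed workaround described above could be sketched like this (the `glusterd_var_lib_t` type name is an assumption for illustration; the point is that the label does not stick to files the geo-replication process creates afterwards):

```shell
# Attempted persistent labeling of the geo-replication directory
# (type name is an assumption, not taken from the thread):
semanage fcontext -a -t glusterd_var_lib_t '/var/lib/glusterd/geo-replication(/.*)?'
restorecon -Rv /var/lib/glusterd/geo-replication

# New files created by the geo-replication process still inherit the
# parent directory's context, so restorecon would have to be re-run
# after every file creation -- which is not workable in practice.
```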
>>
>> - SELinux is preventing /usr/bin/rsync from search access on
>> .gfid/00000000-0000-0000-0000-000000000001
>>
>> Obviously, those need fixing before anyone is able to use Geo-
>> Replication with SELinux enabled on the "master" volume nodes.
>>
>> Should I open a bugzilla at bugzilla.redhat.com for the selinux
>> policy?
>>
>> Further details:
>> [root@glustera ~]# cat /etc/centos-release
>> CentOS Linux release 8.3.2011
>>
>> [root@glustera ~]# rpm -qa | grep selinux | sort
>> libselinux-2.9-4.el8_3.x86_64
>> libselinux-utils-2.9-4.el8_3.x86_64
>> python3-libselinux-2.9-4.el8_3.x86_64
>> rpm-plugin-selinux-4.14.3-4.el8.x86_64
>> selinux-policy-3.14.3-54.el8.noarch
>> selinux-policy-devel-3.14.3-54.el8.noarch
>> selinux-policy-doc-3.14.3-54.el8.noarch
>> selinux-policy-targeted-3.14.3-54.el8.noarch
>>
>> [root@glustera ~]# rpm -qa | grep gluster | sort
>> centos-release-gluster8-1.0-1.el8.noarch
>> glusterfs-8.3-1.el8.x86_64
>> glusterfs-cli-8.3-1.el8.x86_64
>> glusterfs-client-xlators-8.3-1.el8.x86_64
>> glusterfs-fuse-8.3-1.el8.x86_64
>> glusterfs-geo-replication-8.3-1.el8.x86_64
>> glusterfs-server-8.3-1.el8.x86_64
>> libglusterd0-8.3-1.el8.x86_64
>> libglusterfs0-8.3-1.el8.x86_64
>> python3-gluster-8.3-1.el8.x86_64
>>
>>
>> [root@glustera ~]# gluster volume info primary
>>
>> Volume Name: primary
>> Type: Distributed-Replicate
>> Volume ID: 89903ca4-9817-4c6f-99de-5fb3e6fd10e7
>> Status: Started
>> Snapshot Count: 0
>> Number of Bricks: 5 x 3 = 15
>> Transport-type: tcp
>> Bricks:
>> Brick1: glustera:/bricks/brick-a1/brick
>> Brick2: glusterb:/bricks/brick-b1/brick
>> Brick3: glusterc:/bricks/brick-c1/brick
>> Brick4: glustera:/bricks/brick-a2/brick
>> Brick5: glusterb:/bricks/brick-b2/brick
>> Brick6: glusterc:/bricks/brick-c2/brick
>> Brick7: glustera:/bricks/brick-a3/brick
>> Brick8: glusterb:/bricks/brick-b3/brick
>> Brick9: glusterc:/bricks/brick-c3/brick
>> Brick10: glustera:/bricks/brick-a4/brick
>> Brick11: glusterb:/bricks/brick-b4/brick
>> Brick12: glusterc:/bricks/brick-c4/brick
>> Brick13: glustera:/bricks/brick-a5/brick
>> Brick14: glusterb:/bricks/brick-b5/brick
>> Brick15: glusterc:/bricks/brick-c5/brick
>> Options Reconfigured:
>> changelog.changelog: on
>> geo-replication.ignore-pid-check: on
>> geo-replication.indexing: on
>> storage.fips-mode-rchecksum: on
>> transport.address-family: inet
>> nfs.disable: on
>> performance.client-io-threads: off
>> cluster.enable-shared-storage: enable
>>
>> I'm attaching the audit log and sealert analysis from glustera (one
>> of the 3 nodes that make up the 'master' volume).
>>
>>
>> Best Regards,
>> Strahil Nikolov
>
> -------
>
> Community Meeting Calendar:
> Schedule -
> Every 2nd and 4th Tuesday at 14:30 IST / 09:00 UTC
> Bridge: https://meet.google.com/cpu-eiue-hvk
>
> Gluster-devel mailing list
> Gluster-devel(a)gluster.org
> https://lists.gluster.org/mailman/listinfo/gluster-devel
>
We have rebased a large part of the glibc dynamic loader and the x86 CPU
detection infrastructure to the current upstream version. These changes
are going to land in CentOS Stream soon.
These glibc changes mean that going forward, run-time selection of
optimized shared object implementations is possible for all x86
processors (e.g., generic AVX2-optimized code that gets loaded on EPYC
and Xeon Scalable processors). This is primarily being done to allow
user application code to leverage this functionality. At this time, we
are not adjusting or optimizing CentOS Stream packages themselves.
Convenient developer support depends on GCC Toolset 11 and LLVM Toolset
12, which are yet to be released (even in their upstream versions). I’m
working on some form of documentation on how to produce builds that are
compatible with this feature for earlier toolchain versions.
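As a sketch of what such builds could look like once suitable toolchains are available (the library name and source file are placeholders; `/usr/lib64/glibc-hwcaps/x86-64-v3` is the search directory the new loader consults on AVX2-class machines, and `-march=x86-64-v3` assumes a GCC new enough to accept the x86-64 micro-architecture levels):

```shell
# Build a generic copy and an AVX2-class (x86-64-v3) copy of the same
# library; at run time ld.so prefers the glibc-hwcaps variant on CPUs
# that support that feature level.
gcc -shared -fPIC -O2 -o libfoo.so.1 foo.c
gcc -shared -fPIC -O2 -march=x86-64-v3 -o libfoo-v3.so.1 foo.c

install -D libfoo.so.1    /usr/lib64/libfoo.so.1
install -D libfoo-v3.so.1 /usr/lib64/glibc-hwcaps/x86-64-v3/libfoo.so.1
```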
Other changes in the new glibc version include:
* Backwards-compatible ld.so cache format changes to support
glibc-hwcaps
* --argv0 support in ld.so
* DT_AUDIT support: The --audit option in binutils ld finally works as
expected.
* Enhanced --help output in ld.so: “/lib64/ld-linux-x86-64.so.2 --help”
shows library search path information.
* TLS allocation improvements: dlopen is able to load shared objects
that use initial-exec TLS in more cases. There is a new
glibc.rtld.optional_static_tls tunable to support exceptionally large
initial-exec TLS usage after dlopen.
* The CPU tunable namespace has been renamed from “glibc.tune” to
“glibc.cpu”.
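The last two items could be exercised like this (the feature name and target binary are illustrative; available hwcaps names depend on the CPU and glibc build):

```shell
# Print the loader's help output, which now includes the library
# search path list (new in this glibc version):
/lib64/ld-linux-x86-64.so.2 --help

# Mask a CPU feature at run time via the renamed glibc.cpu namespace
# (formerly glibc.tune); here AVX2 is disabled for one process:
GLIBC_TUNABLES=glibc.cpu.hwcaps=-AVX2 /usr/bin/true
```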
If you have any questions, please feel free to ask on-list.
Thanks,
Florian
--
Red Hat GmbH, https://de.redhat.com/ , Registered seat: Grasbrunn,
Commercial register: Amtsgericht Muenchen, HRB 153243,
Managing Directors: Charles Cachera, Brian Klemm, Laurie Krebs, Michael O'Neill
Hello,
The commit
https://git.centos.org/rpms/gnutls/c/3ec527a2c4ad3bad520b1004a50c075d94cc80…
was imported a month ago, but so far there has only been a build for c8
(from today) and none for c8s.
Is the build process automated, or who is responsible for building
the updates for c8s?
Thanks in advance for your help!