Hi Srijan,
I will redeploy the scenario and will check whether the steps include that package. Shouldn't glusterfs-selinux be a dependency?
Best Regards, Strahil Nikolov
On Wednesday, 6 January 2021, 07:29:27 GMT+2, Srijan Sivakumar ssivakum@redhat.com wrote:
Hi Strahil,
SELinux policies and rules have to be added for Gluster processes to work as intended when SELinux is in enforcing mode. Could you confirm whether you've installed the glusterfs-selinux package on the nodes? If not, you can check out the repo at https://github.com/gluster/glusterfs-selinux.
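A quick way to check, for instance (just a sketch):

# is the policy package installed?
rpm -q glusterfs-selinux
# and are the gluster policy modules actually loaded?
semodule -l | grep -i gluster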
Regards, Srijan
On Wed, Jan 6, 2021 at 2:15 AM Strahil Nikolov hunter86_bg@yahoo.com wrote:
Did anyone receive that e-mail? Any hints?
Best Regards, Strahil Nikolov
At 19:05 +0000 on 30.12.2020 (Wed), Strahil Nikolov wrote:
Hello All,
I have been testing Geo-Replication on Gluster v8.3 on top of CentOS 8.3. It seems that everything works until SELinux is added to the equation.
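For reference, the geo-replication session was set up roughly like this (a sketch from memory; 'georep1' and 'secondary-vol' are placeholder names for the secondary host and volume):

gluster volume geo-replication primary georep1::secondary-vol create push-pem
gluster volume geo-replication primary georep1::secondary-vol start
gluster volume geo-replication primary georep1::secondary-vol status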
So far I have identified several issues on the Master Volume's nodes:
- /usr/lib/ld-linux-x86-64.so.2 has a different SELinux context than the target it points to. For details see https://bugzilla.redhat.com/show_bug.cgi?id=1911133
- SELinux is preventing /usr/bin/ssh from search access to /var/lib/glusterd/geo-replication/secret.pem
- SELinux is preventing /usr/bin/ssh from search access to .ssh
- SELinux is preventing /usr/bin/ssh from search access to /tmp/gsyncd-aux-ssh-tnwpw5tx/274d5d142b02f84644d658beaf86edae.sock
Note: Using 'semanage fcontext' doesn't help here, because files created by the geo-replication process inherit the SELinux context of their parent directory, so you would have to run restorecon after every file creation (see the sketch after this list).
- SELinux is preventing /usr/bin/rsync from search access on .gfid/00000000-0000-0000-0000-000000000001
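To illustrate that note, this is roughly what was tried for the secret key (only a sketch; ssh_home_t is used here as an example type, the real target type would have to match the denial):

# add a file-context rule and apply it once
semanage fcontext -a -t ssh_home_t '/var/lib/glusterd/geo-replication/secret\.pem'
restorecon -v /var/lib/glusterd/geo-replication/secret.pem
# the rule only takes effect when restorecon runs; anything the
# geo-replication process creates afterwards (e.g. the aux ssh socket
# under /tmp) comes up with the parent directory's context again,
# so the relabel would have to be repeated after every creation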
Obviously, these need fixing before anyone is able to use Geo-Replication with SELinux enabled on the "master" volume nodes.
Should I open a bug at bugzilla.redhat.com for the SELinux policy?
Further details:

[root@glustera ~]# cat /etc/centos-release
CentOS Linux release 8.3.2011
[root@glustera ~]# rpm -qa | grep selinux | sort
libselinux-2.9-4.el8_3.x86_64
libselinux-utils-2.9-4.el8_3.x86_64
python3-libselinux-2.9-4.el8_3.x86_64
rpm-plugin-selinux-4.14.3-4.el8.x86_64
selinux-policy-3.14.3-54.el8.noarch
selinux-policy-devel-3.14.3-54.el8.noarch
selinux-policy-doc-3.14.3-54.el8.noarch
selinux-policy-targeted-3.14.3-54.el8.noarch
[root@glustera ~]# rpm -qa | grep gluster | sort
centos-release-gluster8-1.0-1.el8.noarch
glusterfs-8.3-1.el8.x86_64
glusterfs-cli-8.3-1.el8.x86_64
glusterfs-client-xlators-8.3-1.el8.x86_64
glusterfs-fuse-8.3-1.el8.x86_64
glusterfs-geo-replication-8.3-1.el8.x86_64
glusterfs-server-8.3-1.el8.x86_64
libglusterd0-8.3-1.el8.x86_64
libglusterfs0-8.3-1.el8.x86_64
python3-gluster-8.3-1.el8.x86_64
[root@glustera ~]# gluster volume info primary

Volume Name: primary
Type: Distributed-Replicate
Volume ID: 89903ca4-9817-4c6f-99de-5fb3e6fd10e7
Status: Started
Snapshot Count: 0
Number of Bricks: 5 x 3 = 15
Transport-type: tcp
Bricks:
Brick1: glustera:/bricks/brick-a1/brick
Brick2: glusterb:/bricks/brick-b1/brick
Brick3: glusterc:/bricks/brick-c1/brick
Brick4: glustera:/bricks/brick-a2/brick
Brick5: glusterb:/bricks/brick-b2/brick
Brick6: glusterc:/bricks/brick-c2/brick
Brick7: glustera:/bricks/brick-a3/brick
Brick8: glusterb:/bricks/brick-b3/brick
Brick9: glusterc:/bricks/brick-c3/brick
Brick10: glustera:/bricks/brick-a4/brick
Brick11: glusterb:/bricks/brick-b4/brick
Brick12: glusterc:/bricks/brick-c4/brick
Brick13: glustera:/bricks/brick-a5/brick
Brick14: glusterb:/bricks/brick-b5/brick
Brick15: glusterc:/bricks/brick-c5/brick
Options Reconfigured:
changelog.changelog: on
geo-replication.ignore-pid-check: on
geo-replication.indexing: on
storage.fips-mode-rchecksum: on
transport.address-family: inet
nfs.disable: on
performance.client-io-threads: off
cluster.enable-shared-storage: enable
I'm attaching the audit log and sealert analysis from glustera (one of the 3 nodes comprising the 'master' volume).
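In case it helps, the attachments were generated roughly like this (a sketch; the output file names are just placeholders):

# raw AVC denials from the audit log
ausearch -m avc -ts today > avc-glustera.log
# human-readable analysis of all alerts
sealert -a /var/log/audit/audit.log > sealert-glustera.txt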
Best Regards, Strahil Nikolov
Hi Srijan,
I just checked the gluster repo 'centos-gluster8' and it seems that there is no gluster package containing 'selinux' in the name. The same is true for CentOS 7. Maybe I should address that with the CentOS Storage SIG?
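This is roughly how I checked (a sketch; the repo ids are the ones on my systems and may differ elsewhere):

# EL8 / Storage SIG repo
dnf repoquery --disablerepo='*' --enablerepo=centos-gluster8 '*selinux*'
# EL7 equivalent
yum --disablerepo='*' --enablerepo=centos-gluster8 list available | grep -i selinux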
Best Regards, Strahil Nikolov
Hi Srijan,
I have compiled the rpm from the git repo, zeroed out the lab, and redeployed with that package:
[root@glustera ~]# rpm -qa | grep glusterfs-selinux
glusterfs-selinux-0.1.0-2.el8.noarch
[root@glustera ~]# rpm -ql glusterfs-selinux
/usr/share/selinux/devel/include/contrib/ipp-glusterd.if
/usr/share/selinux/packages/targeted/glusterd.pp.bz2
/var/lib/selinux/targeted/active/modules/200/glusterd
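For the record, the package was built roughly like this (a sketch; the actual build invocation is whatever the repo's README prescribes, and the resulting rpm name may differ):

dnf install -y git rpm-build make selinux-policy-devel
git clone https://github.com/gluster/glusterfs-selinux.git
cd glusterfs-selinux
# ... build the noarch policy rpm per the repo instructions, then:
dnf install ./glusterfs-selinux-*.noarch.rpm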
Yet the result is very bad. I had to set SELinux to permissive on all source nodes to establish a successful geo-rep session, and all 7 denials occurred again on the source volume nodes:
[root@glustera ~]# sealert -a /var/log/audit/audit.log
100% done
found 7 alerts in /var/log/audit/audit.log
--------------------------------------------------------------------------------
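For completeness, permissive mode was set on each source node along these lines (sketch):

# log denials but do not block; reverts to enforcing on reboot unless
# /etc/selinux/config is also changed (SELINUX=permissive)
setenforce 0
getenforce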
Are you sure that glusterfs-selinux is suitable for EL8 and glusterfs v8.3?
Best Regards, Strahil Nikolov