On 11.08.2022 at 17:34, Stephen Smoogen wrote:
On Thu, 11 Aug 2022 at 10:56, Marcin Juszkiewicz wrote:
As part of OpenStack deployments we deploy RabbitMQ. During the current cycle I looked at moving from CentOS Stream 8 to 9, and RabbitMQ is a problem. When I boot a CentOS Stream 9 system and then use a just-built 'rabbitmq' container, memory use of the "/usr/lib64/erlang/erts-12.3.2.2/bin/beam.smp" process goes up to 1.6 GB of RAM:
CS9 on CS9
(rabbitmq)[root@kolla-cs9 /]# rabbitmq-diagnostics memory_breakdown
Reporting memory breakdown on node rabbit@kolla-cs9...
other_system: 1.6233 gb (68.59%)
allocated_unused: 0.5164 gb (21.82%)
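As a cross-check, the same figure can be read from the host side as the resident set size of the beam.smp process. A minimal sketch (run on the host, not inside the container):

  # RSS of the Erlang VM process, reported in kB by ps
  ps -C beam.smp -o pid,rss,args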
CS9 on Debian (versions?)
Debian 11 'bullseye' with up-to-date packages.
If I boot the same container on a Debian host, the same process uses 0.2 GB of RAM:

(rabbitmq)[root@debian /]# rabbitmq-diagnostics memory_breakdown
Reporting memory breakdown on node rabbit@debian...
binary: 0.2787 gb (70.2%)
code: 0.0355 gb (8.93%)
other_system: 0.0255 gb (6.44%)
Debian on CS9
Stats needed for a better comparison.
(rabbitmq)[root@kolla-cs9 /]# rabbitmq-diagnostics memory_breakdown
Reporting memory breakdown on node rabbit@kolla-cs9...
other_system: 1.6231 gb (73.42%)
allocated_unused: 0.512 gb (23.16%)
code: 0.0355 gb (1.61%)
other_proc: 0.0189 gb (0.85%)
binary: 0.0133 gb (0.6%)
other_ets: 0.0034 gb (0.15%)
plugins: 0.0015 gb (0.07%)
atom: 0.0014 gb (0.06%)
mgmt_db: 4.0e-4 gb (0.02%)
connection_other: 4.0e-4 gb (0.02%)
metrics: 3.0e-4 gb (0.01%)
connection_readers: 2.0e-4 gb (0.01%)
queue_procs: 1.0e-4 gb (0.01%)
mnesia: 1.0e-4 gb (0.0%)
connection_channels: 0.0 gb (0.0%)
msg_index: 0.0 gb (0.0%)
quorum_ets: 0.0 gb (0.0%)
connection_writers: 0.0 gb (0.0%)
stream_queue_procs: 0.0 gb (0.0%)
stream_queue_replica_reader_procs: 0.0 gb (0.0%)
queue_slave_procs: 0.0 gb (0.0%)
quorum_queue_procs: 0.0 gb (0.0%)
stream_queue_coordinator_procs: 0.0 gb (0.0%)
reserved_unallocated: 0.0 gb (0.0%)
Erlang 1:24.2.1+dfsg-1~bpo11+1 RabbitMQ 3.9.22-1
I booted a CS9 system and deployed OpenStack using Debian-based containers. Again 1.6 GB memory use.
So let's build CS9-based containers using Erlang/RabbitMQ from the CentOS Stream 9 "messaging/rabbitmq-38" repository. Again 1.6 GB memory use.
(rabbitmq)[root@kolla-cs9 /]# rpm -qa|egrep "(rabbit|erlang-2)"
erlang-24.3.4.2-1.el9s.x86_64
rabbitmq-server-3.9.21-1.el9s.x86_64
(rabbitmq)[root@kolla-cs9 /]# rabbitmq-diagnostics memory_breakdown
Reporting memory breakdown on node rabbit@kolla-cs9...
other_system: 1.6231 gb (73.98%)
allocated_unused: 0.5107 gb (23.28%)
code: 0.0356 gb (1.62%)
other_proc: 0.018 gb (0.82%)
other_ets: 0.0034 gb (0.16%)
atom: 0.0014 gb (0.06%)
plugins: 9.0e-4 gb (0.04%)
mgmt_db: 4.0e-4 gb (0.02%)
metrics: 2.0e-4 gb (0.01%)
binary: 2.0e-4 gb (0.01%)
mnesia: 1.0e-4 gb (0.0%)
connection_other: 0.0 gb (0.0%)
msg_index: 0.0 gb (0.0%)
quorum_ets: 0.0 gb (0.0%)
stream_queue_procs: 0.0 gb (0.0%)
stream_queue_replica_reader_procs: 0.0 gb (0.0%)
connection_readers: 0.0 gb (0.0%)
connection_writers: 0.0 gb (0.0%)
connection_channels: 0.0 gb (0.0%)
queue_procs: 0.0 gb (0.0%)
queue_slave_procs: 0.0 gb (0.0%)
quorum_queue_procs: 0.0 gb (0.0%)
stream_queue_coordinator_procs: 0.0 gb (0.0%)
reserved_unallocated: 0.0 gb (0.0%)
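Small aside: rabbitmq-diagnostics should also accept a JSON formatter, which makes the breakdowns from the two hosts easier to diff; a sketch, with the output file name chosen purely for illustration:

  (rabbitmq)[root@kolla-cs9 /]# rabbitmq-diagnostics memory_breakdown --formatter=json > /tmp/cs9-breakdown.json
  # repeat on the Debian host, then diff the two files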
So if I cut and summarized your data correctly:
CS9 container of rabbitmq on CS9 base system: other_system bloats to 1.6 GB
CS9 container of rabbitmq on Debian base system: other_system stays at 0.025 GB
Debian container of rabbitmq on CS9 base system: something bloats to 1.6 GB
Built-from-scratch container of rabbitmq on CS9 base system: something bloats to 1.6 GB
It is "other_system" in all 4 cases.
I think we would want to make sure that the 'something' in those cases is also other_system, then look at what 'other_system' actually is, and after that at what controls it.
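One way to start answering that is to ask the Erlang VM directly; a sketch, assuming the recon library bundled with RabbitMQ is on the node's code path:

  # per-category memory as the VM itself reports it
  rabbitmqctl eval 'erlang:memory().'
  # how much each allocator type has taken from the OS
  rabbitmqctl eval 'recon_alloc:memory(allocated_types).'
  # ratio of memory actually in use vs. allocated
  rabbitmqctl eval 'recon_alloc:memory(usage).'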
How was this container run? podman, docker, something else?
Docker in all situations. We have not migrated to Podman yet.
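In case it helps narrow down host-side differences, a quick sketch of comparing what Docker hands the container on each host (the container name 'rabbitmq' here is just a placeholder):

  # limits and CPU count as seen from inside the container
  docker exec rabbitmq sh -c 'ulimit -n; nproc'
  # and as configured by the runtime
  docker inspect rabbitmq --format '{{json .HostConfig.Ulimits}} {{.HostConfig.Memory}}'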
How was this system configured?
The host OS was a minimal install in both cases (CS9 host, Debian host).
Base install of packages?
Only what is needed to connect over SSH and manage (from outside) using Ansible.
Later I added tmux, vim, mc and htop when I started checking what was going on.