hi guys. I have two virtually identical hardware boxes which report qcow2 sizes differently:
-> $ du -xh /00-VMsy//enc.vdb.back1.proxmox.qcow2
2.9G    /00-VMsy//enc.vdb.back1.proxmox.qcow2
-> $ du -xh /00-VMsy//enc.vdb.back1.proxmox.qcow2
71G     /00-VMsy//enc.vdb.back1.proxmox.qcow2
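For what it's worth, this is roughly how I've been putting the two numbers side by side; a minimal sketch, assuming GNU coreutils:

$ ls -lh /00-VMsy//enc.vdb.back1.proxmox.qcow2                   # apparent (virtual) size, holes included
$ du -xh /00-VMsy//enc.vdb.back1.proxmox.qcow2                   # blocks actually allocated on disk
$ du -xh --apparent-size /00-VMsy//enc.vdb.back1.proxmox.qcow2   # should match ls on both boxes
$ stat -c '%s bytes apparent, %b blocks of %B bytes' /00-VMsy//enc.vdb.back1.proxmox.qcow2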
I checked the, I thought, obvious places: the ext4 superblock shows the same features, the NVMes are all formatted with the same block size, and the mountpoints mount the same way. But I must still be missing something... trivial? I'm thinking perhaps some kernel tunables, but if so, I cannot find which ones. 'ls' reports the same virtual 71G size everywhere (so with the sparse regions counted in). All thoughts shared are much appreciated. thanks, L.
On Wed, Jun 25, 2025 at 7:04 AM lejeczek via Discuss <discuss@lists.centos.org> wrote:
This looks like a sparse-file issue, something I hit often when creating a qcow2 and then copying/syncing it somewhere else. Most likely the smaller file was the original, sparse file, and it was then copied with a plain cp or with rsync without --sparse.
I usually use 'rsync --sparse ...' even when copying locally, but I wouldn't be surprised if there were a better way of copying sparse files.
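These are the sparse-aware variants I know of, as a rough sketch (src.qcow2/dst.qcow2 are placeholder names):

$ rsync --sparse src.qcow2 dst.qcow2              # turns runs of nulls into holes at the destination
$ cp --sparse=always src.qcow2 dst.qcow2          # GNU cp; re-detects zero runs even in a de-sparsed source
$ qemu-img convert -O qcow2 src.qcow2 dst.qcow2   # rewrites the image, dropping unallocated clusters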
Troy
There is GlusterFS involved in my setup. That qcow2, like all of them in general, was created on a FUSE-mounted Gluster volume. How that fact plays into this, it should not at all, unless there are issues/bugs in Gluster.
One thing I do wonder about, if I let myself speculate, is whether the way the Gluster volume is mounted could affect this and produce the "misalignment" I'm experiencing. I get glusterfs errors depending on how I mount the Gluster volume (a replica), and, just as with du/df, the one box/peer in the three-peer Gluster cluster which is error-free is also the box which reports sizes with the sparse regions counted in.
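If the 71G copy simply lost its holes somewhere along the way, I suppose something like this would punch them back in; untested over the Gluster FUSE mount, and the VM should be shut down first:

$ fallocate --dig-holes /00-VMsy//enc.vdb.back1.proxmox.qcow2     # util-linux; needs hole-punch support in the fs
# or rewrite the image entirely (new.qcow2 is a placeholder name):
$ qemu-img convert -O qcow2 /00-VMsy//enc.vdb.back1.proxmox.qcow2 new.qcow2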