[CentOS] anaconda, kickstart, lvm over raid, logvol --grow, centos7 mystery

Wed Aug 13 23:00:30 UTC 2014
Maxim Shpakov <maxim at osetia.org>

Just want to mention that this behaviour is already a known bug:

https://bugzilla.redhat.com/show_bug.cgi?id=1093144#c7

2014-07-31 12:01 GMT+03:00 Maxim Shpakov <maxim at osetia.org>:
> Hi!
>
> I can confirm this.
>
> --grow on an LVM logical volume is broken for RAID+LVM kickstart installs.
>
>
> bootloader --location=mbr --driveorder=sda,sdb --append="net.ifnames=0 crashkernel=auto rhgb quiet"
> zerombr
> clearpart --all --drives=sda,sdb --initlabel
>
> part raid.1 --asprimary --size=200 --ondisk=sda
> part raid.2 --size=1 --grow --ondisk=sda
> part raid.3 --asprimary --size=200 --ondisk=sdb
> part raid.4 --size=1 --grow --ondisk=sdb
>
> raid /boot --fstype=ext4 --level=RAID1 --device=md0 raid.1 raid.3
> raid pv.1 --level=RAID1 --device=md1 raid.2 raid.4
>
> volgroup vg0 --pesize=65536 pv.1
>
> logvol swap --name=swap --vgname=vg0 --size=4096
> logvol /tmp --fstype=ext4 --name=tmp --vgname=vg0 --size=4096 --fsoptions="noexec,nosuid,nodev,noatime"
> logvol / --fstype=ext4 --name=root --vgname=vg0 --size=10240 --grow --fsoptions="defaults,noatime"
>
> Such a partitioning scheme is not working: Anaconda complains with
> "ValueError: not enough free space in volume group".
>
> But if I remove --grow from the last logvol, everything is OK.
>
> I don't understand what I'm doing wrong; this kickstart works
> flawlessly for C6 installs.
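>
> For now the only things that seem to avoid the error are dropping --grow
> entirely or capping it with --maxsize as described below. A minimal sketch of
> both variants (the --maxsize value is purely illustrative and has to stay
> well below the free space in vg0):
>
> # variant 1: fixed-size root, no --grow
> logvol / --fstype=ext4 --name=root --vgname=vg0 --size=10240 --fsoptions="defaults,noatime"
>
> # variant 2: keep --grow but cap it with an explicit ceiling
> logvol / --fstype=ext4 --name=root --vgname=vg0 --size=10240 --grow --maxsize=25000 --fsoptions="defaults,noatime"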
>
> 2014-07-16 14:21 GMT+03:00 Borislav Andric <borislav.andric at gmail.com>:
>> I am testing some kickstarts on an ESXi virtual machine with a pair of 16GB disks.
>> Partitioning is LVM over RAID.
>>
>> If I use "logvol --grow", I get "ValueError: not enough free space in volume group".
>> The only workaround I can find is to add --maxsize=XXX, where XXX is at least 640MB less than the available space
>> (10 extents, or 320MB, per created logical volume).
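>> (With the 32MiB PE size shown in the pvdisplay output below, that appears to
>> work out to 2 logical volumes x 10 extents x 32MiB = 640MB, which matches the
>> shortfall reported by blivet.)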
>>
>> The following snippet fails with "DEBUG blivet: failed to set size: 640MB short":
>>
>>         part raid.01 --size 512 --asprimary --ondrive=sda
>>         part raid.02 --size   1 --asprimary --ondrive=sda --grow
>>         part raid.11 --size 512 --asprimary --ondrive=sdb
>>         part raid.12 --size   1 --asprimary --ondrive=sdb --grow
>>         raid /boot   --fstype="xfs"   --device="md0" --level=RAID1 raid.01 raid.11
>>         raid pv.01   --fstype="lvmpv" --device="md1" --level=RAID1 raid.02 raid.12
>>         volgroup vg0 pv.01
>>         logvol /     --fstype="xfs" --grow --size=4096 --name=lvRoot --vgname=vg0
>>         logvol swap  --fstype="swap"       --size=2048 --name=lvSwap --vgname=vg0
>>
>> If I only add --maxsize=13164, everything works
>> (but after the install I have 640MB in 20 free PEs in vg0; for details see "after --maxsize install" below).
>>
>>         logvol /     --fstype="xfs" --grow --size=4096 --name=lvRoot --vgname=vg0
>>         ------changed to ----->
>>         logvol /     --fstype="xfs" --grow --size=4096 --name=lvRoot --vgname=vg0 --maxsize=13164
>>
>>
>> Some interesting DEBUG lines:
>>
>>> 15840MB lvmvg vg0 (26)
>>> vg0 size is 15840MB
>>> Adding vg0-lvRoot/4096MB to vg0
>>> vg vg0 has 11424MB free
>>
>> Shouldn't it be 11744, or is there a 320MB overhead?
>>
>>> Adding vg0-lvSwap/2048MB to vg0
>>> vg vg0 has 9056MB free
>>
>> 320MB missing again, for a total of 640MB.
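>> (That is consistent with a fixed reservation of 10 extents per logical volume:
>> 15840 - 4096 - 320 = 11424, and 11424 - 2048 - 320 = 9056.)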
>>
>>> vg vg0: 9056MB free ; lvs: ['lvRoot', 'lvSwap']
>>
>> Nice, I have 9056MB free in vg0 (640MB short, but still...).
>>
>>>  1 requests and 303 (9696MB) left in chunk
>>> adding 303 (9696MB) to 27 (vg0-lvRoot)
>>
>> WTF, who is counting what?!
>>
>>> failed to set size: 640MB short
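>>
>> (A guess at the arithmetic: if the chunk is counted in 32MiB extents, then
>> 303 x 32MiB = 9696MB, i.e. the free space before the per-LV reservations.
>> The grow pass then tries to give all of it to vg0-lvRoot, which is 640MB more
>> than the 9056MB the VG accounting allows, hence "failed to set size: 640MB short".)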
>>
>>
>> Could anyone shed some light?
>>
>>
>>
>>
>>
>> P.S.
>>
>> "after --maxsize install"
>> =========================
>> If I limit the root logvol with --maxsize=13164, after installation I get 640MB of free space (20 free PEs).
>>
>>
>> The missing 640MB is free according to LVM:
>>         [root@c7-pxe-install ~]# pvdisplay
>>           --- Physical volume ---
>>           PV Name               /dev/md1
>>           VG Name               vg0
>>           PV Size               15.49 GiB / not usable 22.88 MiB
>>           Allocatable           yes
>>           PE Size               32.00 MiB
>>           Total PE              495
>>>>>>>>>>>>Free PE               20<<<<<<<<<<<<
>>           Allocated PE          475
>>           PV UUID               uBLBqQ-Tpao-yPVj-1FVA-488x-Bs0K-ebQOmI
>>
>>
>> And I can use it:
>>         [root@c7-pxe-install ~]# lvextend -L +640M vg0/lvRoot
>>           Extending logical volume lvRoot to 13.47 GiB
>>           Logical volume lvRoot successfully resized
>>
>>         [root@c7-pxe-install ~]# xfs_growfs /
>>         meta-data=/dev/mapper/vg0-lvRoot isize=256    agcount=4, agsize=841728 blks
>>                          =                       sectsz=512   attr=2, projid32bit=1
>>                          =                       crc=0
>>         data     =                       bsize=4096   blocks=3366912, imaxpct=25
>>                          =                       sunit=0      swidth=0 blks
>>         naming   =version 2              bsize=4096   ascii-ci=0 ftype=0
>>         log      =internal               bsize=4096   blocks=2560, version=2
>>                          =                       sectsz=512   sunit=0 blks, lazy-count=1
>>         realtime =none                   extsz=4096   blocks=0, rtextents=0
>>         data blocks changed from 3366912 to 3530752
>>         [root@c7-pxe-install ~]# pvdisplay
>>           --- Physical volume ---
>>           PV Name               /dev/md1
>>           VG Name               vg0
>>           PV Size               15.49 GiB / not usable 22.88 MiB
>>           Allocatable           yes (but full)
>>           PE Size               32.00 MiB
>>           Total PE              495
>>           Free PE               0
>>           Allocated PE          495
>>           PV UUID               uBLBqQ-Tpao-yPVj-1FVA-488x-Bs0K-ebQOmI
>>
>>         [root@c7-pxe-install ~]#