[CentOS] CentOS 7: software RAID 5 array with 4 disks and no spares?

Wed Feb 18 22:12:20 UTC 2015
Chris Murphy <lists at colorremedies.com>

On Wed, Feb 18, 2015 at 1:21 PM, Niki Kovacs <info at microlinux.fr> wrote:
> On 18/02/2015 09:24, Michael Volz wrote:
>>
>> md127 apparently only uses 81.95 GB per disk. Maybe one of the partitions
>> has the wrong size. What's the output of lsblk?
>
>
> I just spent a few hours experimenting with the CentOS 7 installer in a
> VirtualBox guest with four virtual hard disks. I can now confirm this is a
> very stupid bug in the (very stupid) installer. Or at least one more random
> weirdness. Here goes.
>
> The new installer is organized around mount points, which have to be defined
> first. OK, so I first define my /boot mount point, set it to 200 MB (which is
> enough), and make it RAID level 1 across four disks with an ext2
> filesystem. So far so good.
>
> Next step is similar: the swap mount point is 2 GB, also RAID level 1 across
> four disks.
>
> Finally, the / (root) mount point is supposed to take up all of the
> remaining disk space. In my virtual guest, I defined 4 x 40 GB to
> fiddle with. The installer shows me something like 38.6 GB, which looks like
> the remaining space on each disk's partition.
>
> Now I define RAID level 5 across four disks...
>
> ... and here it comes.
>
> Once RAID level 5 is defined, I have to REDEFINE the maximum disk space by
> entering an arbitrarily large number, for example 4 x 40 GB = 160 GB. Because
> what is meant here is THE TOTAL RESULTING AMOUNT OF DISK SPACE IN THE RAID 5
> ARRAY, NOT THE MAXIMUM SIZE OF A DISK PARTITION. So once I fill that
> field with 160 GB, the installer "automagically" reduces it to 106.8 GB, which
> is in effect the maximum space available with RAID 5.
>
> Usability anyone?

"installer is organized around mount points" is correct, and what gets
mounted on mount points? Volumes, not partitions. So it's consistent
with the UI that the size is a volume size, not a partition size.
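
For a four-member RAID 5 set, the volume size the installer settles on follows
directly from the usual capacity formula: usable = (members - 1) x per-member
size. A minimal sketch of that arithmetic; the ~35.6 GB per-member figure is
inferred from the 106.8 GB quoted above, not something the installer reports:

```shell
# RAID 5 usable capacity = (members - 1) * per-member size.
members=4
member_mb=35600                            # assumed ~35.6 GB per member, in MB
usable_mb=$(( (members - 1) * member_mb ))
echo "usable: ${usable_mb} MB"             # 106800 MB = 106.8 GB
```

So whatever the user types into the size field, anaconda can only clamp it to
what the members actually provide, which is why 160 GB becomes 106.8 GB.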

The problem here is that users are used to being involved in details
like making specific partitions in a specific order with specific
sizes. The new UI de-emphasizes the need to be involved at that level
of detail. It ends up making things more consistent regardless of
which "device type" you use: LVM, LVM thinp, standard, or Btrfs. If
you emphasize partitions, then you force the user to know esoteric
things.

What is NOT obvious: for single-device installs, if you omit the size
in the create-mount-point dialog, the resulting volume will consume
all remaining space. But since there is no way to preset raid5 at the
time a mount point is created (raid5 is set after the fact), there is
no clear way to say "use all remaining space for this". There is just
a size field for the volume, and a space-available value in the lower
left corner.
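
For unattended installs, kickstart sidesteps the dialog entirely: `--grow` on
the RAID member partitions expresses "use all remaining space" directly, and
the array level is declared up front. A rough sketch only; disk names, sizes,
and the md device names are assumptions, not taken from this thread:

```
# Illustrative kickstart fragment (disk and device names are assumptions).
# One member partition per disk; --grow means "use all remaining space".
part raid.01 --size=200 --ondisk=sda
part raid.02 --size=200 --ondisk=sdb
part raid.03 --size=200 --ondisk=sdc
part raid.04 --size=200 --ondisk=sdd
raid /boot --fstype=ext2 --level=1 --device=md0 raid.01 raid.02 raid.03 raid.04
part raid.11 --size=1 --grow --ondisk=sda
part raid.12 --size=1 --grow --ondisk=sdb
part raid.13 --size=1 --grow --ondisk=sdc
part raid.14 --size=1 --grow --ondisk=sdd
raid / --fstype=xfs --level=5 --device=md1 raid.11 raid.12 raid.13 raid.14
```

(A swap array would be declared the same way as /boot; it is omitted here for
brevity.)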

-- 
Chris Murphy