On 9/5/2014 07:18, Richard Zimmerman wrote:
> Until I read this thread, I've never heard of building RAIDs on bare metal drives. I'm assuming no partition table, just a disk label?
I don't know what you mean by a disk label. BSD uses that term for their alternative to MBR and GPT partition tables, but I think you must mean something else.
In Linux terms, we're talking about /dev/sda, rather than /dev/sda1, for example.
> What is the advantage of doing this?
The whole idea of a RAID is that you're going to take a bunch of member disks and combine them into a larger entity. On top of *that* you may wish to create partitions, LVMs, or whatever.
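If it helps to see that layering concretely, here's a minimal sketch using whole disks as the RAID members. (/dev/sdb and /dev/sdc, the volume group name, and the sizes are just placeholders; substitute your own.)

    mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdb /dev/sdc
    pvcreate /dev/md0                 # LVM physical volume on top of the array
    vgcreate vg_raid /dev/md0
    lvcreate -n data -L 100G vg_raid  # carve out space here, not on the raw disks
    mkfs.xfs /dev/vg_raid/data

That keeps all the slicing and dicing at the LVM layer, while md sees nothing but whole drives.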
So the real question is, why do you believe you need to make each RAID member a *partition* on a disk, instead of just taking over the entire disk? Unless you're going to do something insane like:
    /dev/md0  /dev/sda1 /dev/sdb1
    /dev/md1  /dev/sda2 /dev/sdb2
...you're not going to get any direct utility from composing a RAID from partitions on the RAID member drives.
(Why "insane?" Because now any I/O to /dev/md1 interferes with I/O to /dev/md0, because you only have two head assemblies, so you've wiped out the speed advantages you get from RAID-0 or -1.)
There are ancillary benefits, like the fact that a RAID element spanning the entire drive is inherently 4k-aligned. When a partition table takes up space at the start of the first cylinder, you have to leave the rest of that cylinder unused in order to get back into 4k alignment.
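To put numbers on it: the old DOS partitioning scheme started the first partition at sector 63 (the MBR plus the rest of the first track), while modern tools start at sector 2048 instead. A quick check, nothing but shell arithmetic:

    echo $(( 63 * 512 % 4096 ))      # 3584 -> a sector-63 partition is not 4k-aligned
    echo $(( 2048 * 512 % 4096 ))    # 0    -> a sector-2048 partition is aligned, at the cost of ~1 MiB of slack

A whole-disk member starts at byte 0, so the question never comes up.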
The only downside I saw in this thread is that when you pull such a disk out of a Linux software RAID and put it into another machine, you don't see a clear Linux partition table, so you might think it is an empty drive. But the same thing is true of a hardware RAID member, too.
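And the fix is the same in both cases: ask the drive. If one of these turns up in another box as, say, /dev/sdb (the name will of course depend on the machine), either of these will show it's a RAID member rather than a blank disk:

    mdadm --examine /dev/sdb     # dumps the md superblock: array UUID, RAID level, member role
    blkid /dev/sdb               # reports TYPE="linux_raid_member"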