[CentOS] iSCSI disk preparation

Mon Feb 7 20:38:41 UTC 2011
Ross Walker <rswwalker at gmail.com>

On Mon, Feb 7, 2011 at 3:32 PM, Jason Brown
<jason.brown at millbrookprinting.com> wrote:
> On 02/07/2011 02:36 PM, Dr. Ed Morbius wrote:
>> on 13:56 Mon 07 Feb, Jason Brown (jason.brown at millbrookprinting.com) wrote:
>>> I am currently going through the process of installing/configuring an
>>> iSCSI target and cannot find a good write-up on how to prepare the disks
>>> on the server.  I would like to mirror two disks and present them to
>>> the client.  Mirroring isn't the question; it's how I go about it that's
>>> the problem.  When I partitioned the two drives, mirrored them together,
>>> and presented them to the client, the client saw a disk with no
>>> partition on it.  Should I partition the drive again and then lay the
>>> file system down on top of that?  Or should I delete the partitions on
>>> the target server, mirror sda and sdb whole, and then, when the
>>> client attaches the disk, partition it (/dev/sdc1) and write the file
>>> system there?
>>
>> What are you using for your iSCSI target (storage array)?
>>
>> Generally, RAID for the storage is managed on the target side.
>> You'd use the target's native abilities to create/manage RAID arrays to
>> configure one or more physical disks as desired.  If you're using a
>> dedicated vendor product, it should offer these capabilities through
>> some interface or another.
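>>
>> With md software RAID on the target, for instance, that's a single
>> mdadm call (a minimal sketch; the device names just follow this
>> thread):
>>
>>   # mirror two whole disks on the target -- no partitions required
>>   mdadm --create /dev/md0 --level=1 --raid-devices=2 /dev/sdh /dev/sdi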
>>
>> Presentation of the iSCSI device is as a block storage device to the
>> initiator (host mounting the array).  That's going to be an
>> unpartitioned device.  You can either further partition this device or
>> use it as raw storage.  If you're partitioning it, and using multipath,
>> you'll have to muck with kpartx to make multipath aware of the
>> partitions.
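>>
>> For example (a sketch; mpath0 is just the default multipath device
>> name on CentOS 5, yours may differ):
>>
>>   # make device-mapper mappings for the partitions on the mpath device
>>   kpartx -a /dev/mapper/mpath0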
>>
>> We've elected to skip this locally and create a filesystem on the iSCSI
>> device directly.
>>
>> Creating and mounting filesystems are both generally managed on the
>> initiator.
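>>
>> On the initiator that can be as simple as (a sketch; the device name
>> and mount point are examples):
>>
>>   mkfs.ext3 /dev/sdc
>>   mkdir -p /mnt/iscsi
>>   mount /dev/sdc /mnt/iscsi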
>>
>> Truth is, there's a lot of flexibility with iSCSI, but not a lot of
>> guidance as to best practices that I could find.  Vendor docs have
>> tended to be very poor.  The above is my recommendation, and it should
>> generally work.  Alternate configurations are almost certainly possible,
>> and may be preferable.
>>
>
> I realize now that I left out quite a bit of information from my OP.
>
> The iSCSI target is going to be used for storage.  It has 13 disks to
> be presented, and I would like these split up between two LUNs: one
> mirrored, the second RAID 5, with each LUN going to a different
> server.  The iSCSI target will run CentOS 5.5 with software RAID, and
> the initiators will be CentOS 5.5 as well.
>
> Here are the steps that I tried when I set this up:
> 1) Created partitions on sdh and sdi
> 2) RAID'ed sdh1 and sdi1
> 3) Attached the disk on the client, where it appeared as sdc (this
> started the confusion as to why it wasn't sdc1)
> 4) Partitioned sdc, creating sdc1
> 5) Created an LV on sdc1
> 6) Mounted everything fine and added the LV to fstab
> 7) Rebooted the server and SELinux freaked out which put the system into
> rescue mode
> 8) Removed the storage from fstab, did a 'touch /.autorelabel' and
> rebooted
> 9) When the system came back up the LV was not in /dev; however, I
> could see it in lvdisplay
>
> Since then I've tried looking for a best-practice guide on how to
> properly configure the disks, but have not been able to find anything.
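
First, the reboot problem: on CentOS 5 an iSCSI-backed filesystem in
fstab needs the _netdev mount option, otherwise the system tries to
mount it before the network and iscsi services are up, which is most
likely what dropped you into rescue mode. Something like (path and
mount point are just examples):

  /dev/vg_iscsi/lv_data  /srv/data  ext3  _netdev  0 0

And if an LV shows in lvdisplay but is missing from /dev after the
iSCSI session comes up, 'vgchange -ay vg_iscsi' should activate it.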

As for the disk layout, here's what I would do.

Take the 13 disks and create one big RAID set, RAID10 or RAID50, with a
hot spare in it.
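
With md that's one command, for instance (a sketch; the device names
are placeholders for your 13 disks):

  # 12 disks in the RAID10, the 13th as a hot spare
  mdadm --create /dev/md0 --level=10 --raid-devices=12 \
      --spare-devices=1 /dev/sd[b-n]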

Use LVM: make the array a PV and carve 2 LVs out of it. Don't allocate
all the space up front, just enough to cover current usage and growth
for 6 months or so. Then you can add space to either LV from the VG
pool as it's needed, but do it in large 6-month chunks to minimize
fragmentation.
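
Something like (names and sizes are placeholders):

  pvcreate /dev/md0
  vgcreate vg_iscsi /dev/md0
  lvcreate -L 500G -n lv_lun1 vg_iscsi
  lvcreate -L 1T -n lv_lun2 vg_iscsi
  # later, when one needs more room, grow it from the pool
  lvextend -L +200G /dev/vg_iscsi/lv_lun1

After an lvextend the initiator will of course need to rescan the LUN
and grow the filesystem to see the new space.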

Export the LVs to the servers and voila! You're done.
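
On CentOS 5.5 the export could be done with scsi-target-utils (tgtd),
e.g. (the IQN, tid/lun numbers and initiator address are placeholders):

  tgtadm --lld iscsi --op new --mode target --tid 1 \
      -T iqn.2011-02.com.example:storage.lun1
  tgtadm --lld iscsi --op new --mode logicalunit --tid 1 --lun 1 \
      -b /dev/vg_iscsi/lv_lun1
  tgtadm --lld iscsi --op bind --mode target --tid 1 -I 192.168.1.10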

-Ross