[CentOS] cyrus spool on btrfs?

Fri Sep 8 13:49:02 UTC 2017
hw <hw at gc-24.de>

Mark Haney wrote:
> I hate top posting, but since you've got two items I want to comment on, I'll suck it up for now.

I do, too, yet sometimes it's reasonable.  I also hate it when the lines
are too long :)

> Having SSDs alone will give you great performance regardless of filesystem.

It depends, i.e. I can't tell how these SSDs would behave if large amounts of
data were written to and/or read from them over extended periods of time, because
I haven't tested that.  That isn't the application, anyway.

> BTRFS isn't going to impact I/O any more significantly than, say, XFS.

But mdadm does, and the impact is severe.  I know there are people saying otherwise,
but I've seen the impact myself, and I definitely don't want it on that
particular server because it would likely interfere with other services.  I don't
know whether the software RAID of btrfs does better in that regard, but I'm
seeing btrfs on SSDs being fast, and testing with the particular application has
shown a speedup by a factor of 20--30.
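For what it's worth, btrfs would do the mirroring itself, so no mdadm layer
is involved at all.  A minimal sketch of what I have in mind, where /dev/sdX
and /dev/sdY are just placeholders for the two SSDs and /srv/mail for the
mount point:

  # mirror both data and metadata across the two SSDs
  mkfs.btrfs -d raid1 -m raid1 /dev/sdX /dev/sdY
  mount /dev/sdX /srv/mail
  # a scrub then verifies the checksums on both copies
  btrfs scrub start /srv/mail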

That is the crucial improvement.  If the hardware RAID delivers that, I'll use
that and probably remove the SSDs from the machine, as it wouldn't even make sense
to put temporary data onto them because that would involve software RAID.
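Testing that should be simple enough.  Something like this fio run on each
volume would expose the latency difference; the path is just a placeholder:

  # single-threaded 4k random reads at queue depth 1 to measure latency
  fio --name=latency-test --filename=/mnt/target/fio.dat \
      --size=1G --rw=randread --bs=4k --iodepth=1 \
      --ioengine=libaio --direct=1 --runtime=60 --time_based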

> It does have serious stability/data integrity issues that XFS doesn't have.  There's no reason not to use SSDs for storage of immediate data and mechanical drives for archival data storage.
>
> As for VMs we run a huge Zimbra cluster in VMs on VPC with large primary SSD volumes and even larger (and slower) secondary volumes for archived mail.  It's all CentOS 6 and works very well.  We process 600 million emails a month on that virtual cluster.  All EXT4 inside LVM.

Do you use hardware RAID with SSDs?

> I can't tell you what to do, but it seems to me you're viewing your setup from a narrow SSD/BTRFS standpoint.  Lots of ways to skin that cat.

That's because I do not store data on a single disk without redundancy, and
the SSDs I have are not suitable for hardware RAID.  So what else is there but
either md-RAID or btrfs when I do not want to use ZFS?  Since I also do not want
to use md-RAID, only btrfs remains.  I also like to use subvolumes, though
that isn't a requirement (because I could use directories instead and lose the
ability to make snapshots).
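As a rough sketch of what I mean, with made-up paths; a subvolume can be
snapshotted where a plain directory cannot:

  # use a subvolume for the spool instead of a plain directory
  btrfs subvolume create /srv/mail/spool
  # read-only snapshot, e.g. before an upgrade or for backups
  btrfs subvolume snapshot -r /srv/mail/spool /srv/mail/spool-snap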

I stay away from LVM because that just sucks.  It wouldn't even have any advantage
in this case.


>
>
> On 09/08/2017 08:07 AM, hw wrote:
>>
>> PS:
>>
>> What kind of storage solutions do people use for cyrus mail spools?  Apparently
>> you cannot use remote storage, at least not NFS.  That even makes it difficult
>> to use a VM due to limitations of available disk space.
>>
>> I'm reluctant to use btrfs, but there doesn't seem to be any reasonable alternative.
>>
>>
>> hw wrote:
>>> Mark Haney wrote:
>>>> On 09/07/2017 01:57 PM, hw wrote:
>>>>>
>>>>> Hi,
>>>>>
>>>>> is there anything that speaks against putting a cyrus mail spool onto a
>>>>> btrfs subvolume?
>>>>>
>>>> I might be the lone voice on this, but I refuse to use btrfs for anything, much less a mail spool. I used it in production on DB and Web servers and fought corruption issues and scrubs hanging the system more times than I can count.  (This was within the last 24 months.)  I was told by certain mailing lists that btrfs isn't considered production level.  So, I scrapped the lot, went to xfs and haven't had a problem since.
>>>>
>>>> I'm not sure why you'd want your mail spool on a filesystem that seems to hate being hammered with reads/writes. Personally, on all my mail spools, I use XFS or EXT4.  Our servers here handle 600 million messages a month without trouble on those filesystems.
>>>>
>>>> Just my $0.02.
>>>
>>> Btrfs appears rather useful because the disks are SSDs, because it
>>> allows me to create subvolumes and because it handles SSDs nicely.
>>> Unfortunately, the SSDs are not suited for hardware RAID.
>>>
>>> The only alternative I know is xfs or ext4 on mdadm, without subvolumes,
>>> and md RAID has severe performance penalties which I'm not willing to
>>> accept.
>>>
>>> Part of the data I plan to store on these SSDs greatly benefits from
>>> the low latency, making things about 20--30 times faster for an important
>>> application.
>>>
>>> So what should I do?