[CentOS] new "large" fileserver config questions

Thu Oct 4 12:05:30 UTC 2012
Rafa Griman <rafagriman at gmail.com>

Hi :)

On Wed, Oct 3, 2012 at 8:01 PM, Keith Keller
<kkeller at wombat.san-francisco.ca.us> wrote:
> On 2012-10-03, Rafa Griman <rafagriman at gmail.com> wrote:
>>
>> If it works for you ... I mean, there's no perfect partition scheme
>> (IMHO), depends greatly on what you do, your budget, workflow, file
>> size, ... So if you're happy with this, go ahead. Just some advice:
>> test a couple of different options first just in case ;)
>
> Well, given the warnings about SSD endurance, I didn't want to do
> excessive testing and contribute to faster wear.  But I've been reading
> around, and perhaps I'm just overreacting.  For example:
>
> http://www.storagesearch.com/ssdmyths-endurance.html


As with all new technologies ... starting off is complicated. SSD
vendors have developed new strategies (like over-provisioning: putting
more flash on the drive than the advertised capacity) and new
wear-leveling algorithms, so they're working on it ;)
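
Back of the envelope, with made-up but plausible numbers (every term
here is an assumption, check your drive's datasheet), the endurance
worry tends to shrink:

    100 GB of flash * 3000 P/E cycles = 300 TB of raw endurance
    / 2 for write amplification       = ~150 TB of host writes
    / 50 GB of writes per day         = ~3000 days, i.e. ~8 years

Which is the same kind of math that storagesearch article walks
through.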


> This article talks about RAID1 potentially being better for increasing
> SSD lifetime, despite the full write that mdadm will want to do.
>
> So.  For now, let's just pretend that these disks are not SSDs, but
> regular magnetic disks.  Do people have preferences for either of the
> methods for creating a bootable RAID1 I mentioned in my OP?  I like the
> idea of using a partitionable RAID, but the instructions seem
> cumbersome.  The anaconda method is straightforward, but simply creates
> RAID1 partitions, AFAICT, which is fine till a disk needs to be replaced,
> then gets slightly annoying.
>
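
For what it's worth, replacing a disk in a partition-based RAID1 is
less annoying than it sounds. A rough sketch, assuming two disks where
sda is the replacement and md0/md1 are the arrays (device names and
partition layout are made up, adapt them to your setup):

    # copy the partition table from the surviving disk to the new one
    sfdisk -d /dev/sdb | sfdisk /dev/sda
    # re-add the new partitions to their arrays
    mdadm --manage /dev/md0 --add /dev/sda1
    mdadm --manage /dev/md1 --add /dev/sda2
    # watch the rebuild
    cat /proc/mdstat

And remember to reinstall the boot loader on the replacement disk
(grub-install /dev/sda) so the box still boots if the other disk dies.
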
>> Yup, even though you've got the sw and su options in case you want to
>> play around ... With XFS, you shouldn't have to use su and sw ... in
>> fact you shouldn't have to use many options since it tries to
>> autodetect and use the best options. Check the XFS FAQ.
>
> Well, I'm also on the XFS list, and there are varying opinions on this.
> From what I can tell, most XFS experts suggest just as you do--don't
> second-guess mkfs.xfs, and let it do what it thinks is best.  That's
> certainly what I've done in the past.  But there's a vocal group of
> posters who think this is incredibly foolish, and strongly suggest
> determining these numbers on your own.  If there were a straightforward
> way to do this with standard CentOS tools (well, plus tw_cli if needed)
> then I could try both methods and see which worked better.  John Doe
> suggested a guideline which I may try out.  But my gut instinct is that
> I shouldn't try to second-guess mkfs.xfs.


As always, if you know what you're doing ... feel free to define the
parameters/options yourself ;) Oh, and only if you've got the time to
test different options/values ;) If you know how your app reads/writes
to disk, how the RAID cache works, and so on, you can probably pick
better options/values ... but that takes a lot of time, testing and
reading. XFS's default options might be a bit more conservative, but
at least you know they "work".
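
If you do want to hand mkfs.xfs the geometry, su/sw are easy enough to
spell out. A hypothetical example, assuming a hardware RAID6 of 10
disks (8 of them data) with a 64 KiB chunk size on /dev/sdc (adjust
all of that to your controller's real settings):

    # stripe unit = RAID chunk size, stripe width = number of data disks
    mkfs.xfs -d su=64k,sw=8 /dev/sdc

The hands-off alternative is a plain "mkfs.xfs /dev/sdc" and letting
it autodetect what it can.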

You have probably seen XFS list members get "scolded" for messing
around with the allocation group (AG) count or other options and then
complaining that performance has dropped. I don't usually mess around
with the options and just let mkfs decide ... after all, the XFS devs
spend more time benchmarking, reading and testing than I do ;)
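
If you're curious what mkfs.xfs would pick without committing to
anything, -N does a dry run and just prints the parameters it would
use; xfs_info shows the same for an existing filesystem (again,
/dev/sdc and /mountpoint are placeholders):

    # print the geometry mkfs.xfs would use, without writing anything
    mkfs.xfs -N /dev/sdc
    # show the geometry of an existing (mounted) filesystem
    xfs_info /mountpoint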

I've been using XFS for a long time and I'm very happy with how it
works out of the box (YMMV).


>> Nope, just mass extinction of the Human Race. Nothing to worry about.
>
> So, it's a win-win?  ;-)


Definitely :D

   Rafa