[CentOS] Btrfs going forward, was: Errors on an SSD drive

Fri Aug 11 18:56:42 UTC 2017
Warren Young <warren at etr-usa.com>

On Aug 11, 2017, at 12:39 PM, hw <hw at gc-24.de> wrote:
> 
> Warren Young wrote:
> 
>> [...]
>>>> What do they suggest as a replacement?
>> 
>> Stratis: https://stratis-storage.github.io/StratisSoftwareDesign.pdf
> 
> Can I use that now?

As I said, they’re targeting the first testable releases for Fedora 28.  Whether, how, and on what schedule Stratis gets into RHEL will be based on how well those tests go.

So, if you want Stratis in RHEL or CentOS and you want it to be awesome, you need to get involved with Fedora.  Wishes are not changes.

> How do you install on an XFS filesystem that is aligned to the stripe size and the
> number of stripe units when using hardware RAID?

That’s one of many reasons why you want to use software RAID if at all possible.  Software RAID makes many things easier than hardware RAID does.
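To make that concrete: on a hardware controller you have to look up the array’s geometry and feed it to mkfs.xfs by hand, whereas mkfs.xfs reads an MD device’s geometry straight from the kernel.  A sketch, with made-up device names and a hypothetical 6-disk RAID 5 using a 64 KiB stripe unit:

    # Hardware RAID: pass the geometry yourself (5 data disks here).
    mkfs.xfs -d su=64k,sw=5 /dev/sdb1

    # MD RAID: mkfs.xfs detects the geometry and aligns automatically.
    mdadm --create /dev/md0 --level=5 --raid-devices=6 --chunk=64 /dev/sd[b-g]1
    mkfs.xfs /dev/md0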

Hardware RAID made the most sense when motherboards came with only 2 PATA ports and single-core CPUs spent double-digit percentages of their capacity calculating parity.  Over time, the advantages of hardware RAID have greatly eroded.

> What if you want to use SSDs to install the system on?  That usually puts hardware
> RAID out of the question.

SSD and other caching layers are explicitly part of the Stratis design, but won’t be in its first versions.

Go read the white paper.

> I don’t want the performance penalty MD brings, even as a home user.

You keep saying that.  [citation needed]
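If you want to put a number on it, measure it on your own hardware.  A rough sketch with fio, using assumed device names; note that these runs write to their targets, so point them at scratch devices only:

    # Compare sequential write throughput: MD array vs. a single member disk.
    fio --name=md  --filename=/dev/md0 --rw=write --bs=1M --size=4G --direct=1
    fio --name=raw --filename=/dev/sdb --rw=write --bs=1M --size=4G --direct=1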

> Same goes for ZFS.

You want data checksumming and 3+ copies of metadata and copy-on-write and… but it all has to come for free?
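Those features are exactly where the cost goes, and ZFS exposes most of them as per-dataset properties, so you can see what you’re paying for, and trade it away if you insist.  A sketch, assuming a pool named tank with a dataset called scratch:

    # Inspect the properties that drive the overhead:
    zfs get checksum,copies,redundant_metadata tank

    # You can dial them down, at the price of exactly the protections
    # you asked for:
    zfs set checksum=off tank/scratch
    zfs set redundant_metadata=most tank/scratch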

> I can’t tell yet how the penalty looks with btrfs,
> only that I haven’t noticed any yet.

https://www.phoronix.com/scan.php?page=article&item=linux_317_4ssd&num=2

> And that brings back the question of why nobody makes a hardware ZFS controller.

They do.  It’s called an Intel Xeon. :)

All RAID is software, at some level.

> Enterprise users would probably love that, provided that the performance issues
> could be resolved.

Enterprise users *do* have hardware ZFS appliances:

    https://www.ixsystems.com/truenas/
    https://nexenta.com/products/nexentastor

These are FreeBSD- and Illumos-based ZFS storage appliances, respectively, with all the enterprisey features you could want, and they’re priced accordingly.

> Just try to copy an LV into another VG, especially when the VG resides on different devices.

ZFS send/receive makes that pretty easy.  You can even do it incrementally by sending only the differences between two snapshots.  It’s fast enough that people use it to keep failover servers in sync.
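A minimal sketch, with pool and dataset names assumed:

    # Full copy of a dataset into another pool:
    zfs snapshot tank/data@base
    zfs send tank/data@base | zfs receive backup/data

    # Later, send only what changed since @base:
    zfs snapshot tank/data@today
    zfs send -i tank/data@base tank/data@today | zfs receive backup/data

    # The same thing over SSH is the usual failover setup:
    zfs send -i tank/data@base tank/data@today | ssh standby zfs receive -F tank/data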

> Or try to make a snapshot in another VG
> because the devices the snapshot source resides on don’t have enough free
> space.

I don’t think any volume-managing filesystem will fix that problem.  Not enough space is not enough space.

I think the best answer ZFS offers there is, “Why do you have more than one pool?”  Not a great answer, but it should at least make you re-justify your current storage design to yourself.
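For what it’s worth, the ZFS way out of that corner is to grow the one pool rather than shuffle volumes between pools.  A sketch, with assumed device names:

    # Add another mirror vdev; snapshots need no separate home,
    # they just draw on the pool’s free space:
    zpool add tank mirror /dev/sdh /dev/sdi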