Valeri Galtsev wrote:
On Fri, September 8, 2017 9:48 am, hw wrote:
m.roth@5-cent.us wrote:
hw wrote:
Mark Haney wrote:
<snip>
>> BTRFS isn't going to impact I/O any more significantly than, say, XFS.
>
> But mdadm does, the impact is severe. I know there are ppl saying
> otherwise, but I've seen the impact myself, and I definitely don't want
> it on that particular server because it would likely interfere with
> other services.
<snip>
I haven't really been following this thread, but if your requirements
are that heavy, you're past the point that you need to spring some money
and buy hardware RAID cards, like LSI, er, Avago, I mean, who's bought
them more recently?
The workload doesn't need to be heavy for the impact of md-RAID to be noticeable.
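If anyone wants to quantify the md-RAID overhead rather than argue about it, an fio A/B run against the raw device and the md array is one way. This is only a sketch: /dev/sdb and /dev/md0 are placeholder device names, and fio in write mode destroys data on the target, so the script just prints the commands instead of executing them.

```shell
#!/bin/sh
# Common fio options: 4k random writes, direct I/O, 60 s timed run.
common="--rw=randwrite --bs=4k --iodepth=32 --ioengine=libaio --direct=1 --runtime=60 --time_based"

# Print (do not run!) the same workload against a single SSD and the md array.
# Compare the reported IOPS/latency of the two runs to see the md overhead.
for target in /dev/sdb /dev/md0; do
    echo "fio --name=$(basename "$target") --filename=$target $common"
done
```

Run each printed command by hand, on scratch devices only, and compare the IOPS and completion-latency numbers fio reports.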
Hardware RAID is already in place, but the SSDs are "extra" and, as I said, not suited to be used with hardware RAID.
Could someone, please, elaborate on the statement that "SSDs are not suitable for hardware RAID".
When you search for it, you'll find that besides wearing out undesirably fast --- which can apparently be attributed mostly to such drives having less over-provisioning --- you may also see performance degrade over time, potentially to the point where it is worse than what you would get with spinning disks, or at least not much better.
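For what it's worth, the wear-out can at least be watched via SMART. A hedged sketch --- the attribute names vary by vendor (e.g. Wear_Leveling_Count on Samsung, Media_Wearout_Indicator on Intel), and the sample line below is illustrative, not taken from a real drive:

```shell
#!/bin/sh
# Illustrative smartctl -A output line for a wear attribute (normalized
# value starts near 100 on a new drive and counts down toward the threshold).
sample_line="177 Wear_Leveling_Count     0x0013   094   094   000    Pre-fail  Always       -       201"

# On a live system you would feed real output instead, e.g.:
#   smartctl -A /dev/sda | awk '$2 ~ /Wear_Leveling|Wearout/ {print $2, $4}'
echo "$sample_line" | awk '$2 ~ /Wear_Leveling|Wearout/ {print $2, $4}'
```

Checking that value periodically tells you how fast the controller's write pattern is actually eating the drive.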
Add to that firmware designed for an entirely different application and carrying its own bugs, plus everyone's experiences with surprisingly incompatible hardware, and you can imagine why putting an SSD that was never designed for hardware RAID behind a hardware RAID controller is a bad idea. There is a night-and-day difference between "consumer hardware" and hardware you can actually use, and the difference is not only the price you pay.