On 8 September 2017 at 12:13, Valeri Galtsev <galtsev@kicp.uchicago.edu> wrote:
On Fri, September 8, 2017 11:07 am, Stephen John Smoogen wrote:
On 8 September 2017 at 11:00, Valeri Galtsev <galtsev@kicp.uchicago.edu> wrote:
On Fri, September 8, 2017 9:48 am, hw wrote:
m.roth@5-cent.us wrote:
hw wrote:
Mark Haney wrote:
<snip>
>> BTRFS isn't going to impact I/O any more significantly than, say, XFS.
>
> But mdadm does; the impact is severe. I know there are people saying
> otherwise, but I've seen the impact myself, and I definitely don't want
> it on that particular server because it would likely interfere with
> other services.
<snip>
I haven't really been following this thread, but if your requirements are
that heavy, you're past the point that you need to spring some money and
buy hardware RAID cards, like LSI, er, Avago, I mean, who's bought them
more recently?
The requirements don't have to be heavy for the impact of md-RAID to be noticeable.

Hardware RAID is already in place, but the SSDs are "extra" and, as I said, not suited to use with hardware RAID.
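For what it's worth, the difference is easy to measure with a few lines of Python. The sketch below times synchronous 4 KiB O_DIRECT writes against two block devices, e.g. a plain SSD versus an md array built from the same class of disk. The device paths are placeholders, it requires Linux and root, and it scribbles over the start of whatever you point it at, so only ever run it against scratch devices:

import mmap, os, time

def write_latency(path, block_size=4096, count=1000):
    # mmap(-1, n) gives a page-aligned anonymous buffer, which O_DIRECT requires
    buf = mmap.mmap(-1, block_size)
    buf.write(os.urandom(block_size))
    fd = os.open(path, os.O_WRONLY | os.O_DIRECT | os.O_SYNC)
    try:
        start = time.perf_counter()
        for _ in range(count):
            os.pwrite(fd, buf, 0)   # same aligned offset; we only care about latency
        return (time.perf_counter() - start) / count
    finally:
        os.close(fd)
        buf.close()

# Placeholder devices -- substitute your own scratch devices.
for dev in ("/dev/sdb", "/dev/md0"):
    print("%s: %.3f ms per 4 KiB O_DIRECT write" % (dev, write_latency(dev) * 1000))

If the md array shows markedly higher per-write latency than the raw device under an otherwise idle system, that's the overhead being discussed, without any heavy workload involved.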
Could someone please elaborate on the statement that "SSDs are not suitable for hardware RAID"?
It will depend on the type of SSD and the type of hardware RAID. There are at least four different classes of SSD drive, with different levels of cache, write/read performance, number of lifetime writes, and so on. There are also multiple types of hardware RAID controller. Many of them try to even out disk usage in different ways, which can mean 'moving' heavily used data from slow parts of the storage to fast parts; a toy sketch of the idea follows below.
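To make the "moving data" idea concrete, here is a conceptual toy of controller-side tiering, not any vendor's actual algorithm: the controller tracks per-block access counts and keeps the hottest blocks on the fast region, demoting everything else to the slow one.

from collections import Counter

class TieredStore:
    """Toy model of tiering: the N hottest blocks live on the fast tier."""
    def __init__(self, fast_capacity=4):
        self.fast = set()        # block IDs currently promoted to the fast tier
        self.hits = Counter()    # access counts per block
        self.fast_capacity = fast_capacity

    def access(self, block):
        self.hits[block] += 1
        # Promote the N hottest blocks; whatever falls out of the top N is demoted.
        self.fast = {b for b, _ in self.hits.most_common(self.fast_capacity)}
        return "fast" if block in self.fast else "slow"

store = TieredStore(fast_capacity=2)
for b in [1, 1, 2, 3, 1, 2, 2, 3, 3, 3]:
    print("block %d served from the %s tier" % (b, store.access(b)))

The point of the toy is only that the controller reshuffles data behind the OS's back based on observed access patterns, which interacts badly with an SSD's own wear-leveling assumptions.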
Wow, you learn something new every day ;-) Which hardware RAIDs do this moving of data (manufacturer/model, please; believe it or not, I've never heard of that ;-). And the "slow part" and "fast part" of what are the data being moved between?

Thanks in advance for the tutorial!
I thought it was HP that had these, but I can't find it... which means that, without references, I get an F. My apologies on that. Thank you for keeping me honest.