<div class="gmail_quote">On Wed, Apr 13, 2011 at 6:04 PM, Ross Walker <span dir="ltr"><<a href="mailto:rswwalker@gmail.com">rswwalker@gmail.com</a>></span> wrote:<br><blockquote class="gmail_quote" style="margin:0 0 0 .8ex;border-left:1px #ccc solid;padding-left:1ex;">
<div class="im">><br>
>> One was a hardware raid over fibre channel, which silently corrupted
>> itself. System checked out fine, raid array checked out fine, xfs was
>> replaced with ext3, and the system ran without issue.
>>
>> Second was multiple hardware arrays over linux md raid0, also over fibre
>> channel. This was not-so-silent corruption, in that xfs would detect it
>> and lock the filesystem into read-only before it, pardon the pun, truly
>> fscked itself. Happened two or three times before we gave up, split up
>> the raid, and went ext3. Again, no issues.
>
> Every now and then I hear these XFS horror stories. They seem almost impossible to believe.
>
> Nothing breaks for absolutely no reason, and failure to know where the breakage was shows that maybe there weren't adequately skilled technicians for the technology deployed.
>
> XFS, if run in a properly configured environment, will run flawlessly.

That's not entirely true. Even in CentOS 5.3(?), we ran into an issue where XFS running on an md array would lock up for seemingly no reason due to possible corruption. I've even bookmarked the relevant bug thread for posterity's sake, since it caused us so much grief.
https://bugzilla.redhat.com/show_bug.cgi?id=512552