I am about to install a new server running CentOS 5.4. The server will contain pretty critical data that we can't afford to corrupt.
I would like to benefit from the extra speed and features of an ext4 filesystem, but I don't have any experience with it. Is there some member of the list who can enlighten me on whether ext4 is mature enough to be used on a production server without too much risk?
Thank you!
Miguel Medalha wrote:
I am about to install a new server running CentOS 5.4. The server will contain pretty critical data that we can't afford to corrupt.
I would like to benefit from the extra speed and features of an ext4 filesystem, but I don't have any experience with it. Is there some member of the list who can enlighten me on whether ext4 is mature enough to be used on a production server without too much risk?
Some people have encountered data loss issues on Ubuntu (quite some time back, and nothing reported recently), and ext4 support is not yet official in CentOS 5/RHEL 5.
thus Chan Chung Hang Christopher spake:
Miguel Medalha wrote:
I am about to install a new server running CentOS 5.4. The server will contain pretty critical data that we can't afford to corrupt.
I would like to benefit from the extra speed and features of an ext4 filesystem, but I don't have any experience with it. Is there some member of the list who can enlighten me on whether ext4 is mature enough to be used on a production server without too much risk?
Some people have encountered data loss issues on Ubuntu (quite some time back, and nothing reported recently), and ext4 support is not yet official in CentOS 5/RHEL 5.
Hi,
The data loss issue mentioned was patched / a workaround was applied [0]; however, this is not a real concern here, since RHEL/CentOS does not officially support ext4 yet.
For enterprise environments my favorite FS is XFS, YMMV, though.
HTH,
Timo
[0] -- http://www.h-online.com/open/news/item/Ext4-data-loss-explanations-and-worka...
For enterprise environments my favorite FS is XFS, YMMV, though.
I also thought about using xfs, but I don't like the idea of it being dependent on an external kernel module that is always lagging behind the current kernel version.
Red Hat made the very questionable decision of NOT including the xfs module in the 32-bit flavor of RHEL 5.4...
thus Miguel Medalha spake:
For enterprise environments my favorite FS is XFS, YMMV, though.
I also thought about using xfs, but I don't like the idea of it being dependent on an external kernel module that is always lagging behind the current kernel version.
OTOH XFS never was 'invented here'...
Red Hat made the very questionable decision of NOT including the xfs module in the 32-bit flavor of RHEL 5.4...
Well, XFS was developed on IRIX, so back then it was a pure 64-bit FS...
Timo
On Sat, Dec 5, 2009 at 8:27 AM, Miguel Medalha miguelmedalha@sapo.pt wrote:
For enterprise environments my favorite FS is XFS, YMMV, though.
I also thought about using xfs, but I don't like the idea of it being dependent on an external kernel module that is always lagging behind the current kernel version.
This is no longer true. The xfs kernel module offered by CentOS became kABI-compatible some time ago -- meaning it survives kernel updates. And, as of CentOS 5.4, xfs is now enabled in the kernel, so no need for any external kernel module. But yes, this is available for x86_64 only.
Akemi
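For anyone wanting to try it, a minimal sketch on an x86_64 CentOS 5.4 box (assuming the xfsprogs userland package is available from the repositories, and using example device/mount names):

modinfo xfs | head -3          # confirm the in-kernel module is present
yum install xfsprogs           # userland tools (mkfs.xfs, xfs_repair, ...)
mkfs.xfs /dev/sdb1             # example spare partition -- not your live data
mount -t xfs /dev/sdb1 /mnt/test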
(...) The xfs kernel module offered by CentOS became kABI-compatible some time ago -- meaning it survives kernel updates.
That is a clear improvement over the previous situation. I did suspect it but was not sure about it. Thank you for the information. I will do some tests with xfs, then.
And, as of CentOS 5.4, xfs is now enabled in the kernel, so no need for any external kernel module. But yes, this is available for x86_64 only
... a decision that many people have trouble understanding!
On Sat, Dec 5, 2009 at 9:15 AM, Miguel Medalha miguelmedalha@sapo.pt wrote:
And, as of CentOS 5.4, xfs is now enabled in the kernel, so no need for any external kernel module. But yes, this is available for x86_64 only
... a decision that many people have trouble understanding!
I asked Eric Sandeen of RH about the 64-bit only support. Here is what he said:
"xfs is targeted for big filesystems; 32-bit can't go over 16T anyway" "plus there's that 4k stack issue which can be a problem in some configurations"
Akemi
Miguel Medalha wrote:
(...) The xfs kernel module offered by CentOS became kABI-compatible some time ago -- meaning it survives kernel updates.
That is a clear improvement over the previous situation. I did suspect it but was not sure about it. Thank you for the information. I will do some tests with xfs, then.
And, as of CentOS 5.4, xfs is now enabled in the kernel, so no need for any external kernel module. But yes, this is available for x86_64 only
... a decision that many people have trouble understanding!
It seems like a logical choice, given that xfs tends to crash with the 4k stacks in the 32-bit kernel, especially when layered with lvm/md/nfs.
On 05.12.2009 18:15, Miguel Medalha wrote:
And, as of CentOS 5.4, xfs is now enabled in the kernel, so no need for any external kernel module. But yes, this is available for x86_64 only
... a decision that many people have trouble understanding!
XFS is not stable on 32-bit systems. You should not use it there. You need a 64-bit kernel.
Default for servers should be 64-bit now anyway. Not many reasons left for a 32-bit system, and more and more third-party applications have less and less support for 32-bit platforms in general.
XFS is not stable on 32-bit systems. You should not use it there. You need a 64-bit kernel.
Default for servers should be 64-bit now anyway. Not many reasons left for a 32-bit system, and more and more third-party applications have less and less support for 32-bit platforms in general.
That is for you rich people :-) Not everyone can afford the latest and greatest server hardware. There are tons of older servers out there. I still manage some servers with only 2GB of RAM and some of their motherboards accept a *maximum* of 4GB. Those precious few GB are better used with a 32-bit OS, don't you agree?
Miguel Medalha wrote:
XFS is not stable on 32-bit systems. You should not use it there. You need a 64-bit kernel.
Default for servers should be 64-bit now anyway. Not many reasons left for a 32-bit system, and more and more third-party applications have less and less support for 32-bit platforms in general.
That is for you rich people :-) Not everyone can afford the latest and greatest server hardware. There are tons of older servers out there. I still manage some servers with only 2GB of RAM and some of their motherboards accept a *maximum* of 4GB. Those precious few GB are better used with a 32-bit OS, don't you agree?
If they do what you want without making you wait, why even consider changing the filesystem that has been working for years on these machines?
If they do what you want without making you wait, why even consider changing the filesystem that has been working for years on these machines?
Adding new, bigger disks and new filesystems? Wanting these to be the fastest that is reasonably possible? As for the system that raised the question (again) for me, I've decided to make it ext3, wait for a while as ext4 matures, and convert it later. This interesting possibility made me decide on ext3 (again).
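For reference, the later ext3-to-ext4 conversion mentioned above looks roughly like this with a recent e2fsprogs (a sketch only -- on CentOS 5 the tech-preview tools are packaged separately and may be named differently; back up first, and the device/mount names are examples). Note that existing files keep their old block mapping; only newly written files get extents:

umount /data
tune2fs -O extents,uninit_bg,dir_index /dev/sdb1
e2fsck -fp /dev/sdb1             # required after changing the feature flags
mount -t ext4 /dev/sdb1 /data    # and change ext3 to ext4 in /etc/fstab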
Miguel Medalha wrote:
If they do what you want without making you wait, why even consider changing the filesystem that has been working for years on these machines?
Adding new, bigger disks and new filesystems? Wanting these to be the fastest that is reasonably possible? As for the system that raised the question (again) for me, I've decided to make it ext3, wait for a while as ext4 matures, and convert it later. This interesting possibility made me decide on ext3 (again).
The only thing that can make filesystems fast is buffering in RAM, so you'd probably want to match that increase in disk space with lots more RAM, especially if you use a filesystem that needs it for the improvements it claims.
On Thu, Dec 10, 2009 at 12:06 PM, Morten Torstensen morten@mortent.org wrote:
On 05.12.2009 18:15, Miguel Medalha wrote:
And, as of CentOS 5.4, xfs is now enabled in the kernel, so no need for any external kernel module. But yes, this is available for x86_64 only
... a decision that many people have trouble understanding!
XFS is not stable on 32-bit systems. You should not use it there. You need a 64-bit kernel.
Default for servers should be 64-bit now anyway. Not many reasons left for a 32-bit system, and more and more third-party applications have less and less support for 32-bit platforms in general.
Unless you're talking about desktop systems and things like flash...
and less support for 32-bit platforms in general.
Unless you're talking about desktop systems and things like flash...
(OT) The beta Flash plugin for 64-bit works really well; I've been using it for months on Fedora and CentOS:
http://labs.adobe.com/downloads/flashplayer10_64bit.html
cd /usr/lib64/mozilla/plugins
sudo tar -xzf ~/Downloads/libflashplayer-10.0.32.18.linux-x86_64.so.tar.gz
Timo Schoeler wrote:
For enterprise environments my favorite FS is XFS, YMMV, though.
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
is B) no longer an issue?
I wanna know how come JFS/JFS2 (originally from IBM) isn't more popular in the Linux world? At least as implemented in AIX, it's rock stable, journaling, has excellent performance, and handles both huge files and lots of tiny files without blinking. JFS2 handles really huge file systems, too. I really like how, in AIX, the VM and FS tools are coordinated, so expanding and reorganizing file systems is trivial, nearly as simple as Sun's ZFS.
[1] replace power failure with unexpected ups event if you prefer.
thus John R Pierce spake:
Timo Schoeler wrote:
For enterprise environments my favorite FS is XFS, YMMV, though.
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
Well, I have used XFS both on IRIX (which I still use) and on GNU/Linux; I have not had a single problem with it. In a company I worked for (it was 2005, I think) we had to decide which FS we would use for our new multi-TiB EMC2 storage. After some weeks of testing, we ended up using XFS.
Searching around, you'll find many benchmarks of FS x vs. FS y vs. ...
is B) no longer an issue?
I wanna know how come JFS/JFS2 (originally from IBM) isn't more popular in the Linux world? At least as implemented in AIX, it's rock stable, journaling, has excellent performance, and handles both huge files and lots of tiny files without blinking. JFS2 handles really huge file systems, too. I really like how, in AIX, the VM and FS tools are coordinated, so expanding and reorganizing file systems is trivial, nearly as simple as Sun's ZFS.
I use AIX on my personal workstation in the office (besides my CentOS machine); it's awesome.
However, one has to keep in mind that for both XFS as well as for JFS/JFS2 a few/some/many utilities are not ported over to the GNU/Linux world. Furthermore, LVM surely doesn't support all the nitty-gritty stuff that XFS, JFS etc. support...
[1] replace power failure with unexpected ups event if you prefer.
Timo
On Sat, 05 Dec 2009 10:48:56 -0800 John R Pierce pierce@hogranch.com wrote:
Timo Schoeler wrote:
For enterprise environments my favorite FS is XFS, YMMV, though.
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
is B) no longer an issue?
You get horror stories about anything, depending on which people you ask. For example, where reiserfs was supposed to eat data left and right some years ago, I had 6 data-losing crashes on ext3 and 0 with reiserfs. On the same machine, same disks, so same conditions. Go figure.
I wanna know how come JFS/JFS2 (originally from IBM) isn't more popular in the Linux world? At least as implemented in AIX, it's rock stable, journaling, has excellent performance, and handles both huge files and lots of tiny files without blinking. JFS2 handles really huge file systems, too. I really like how, in AIX, the VM and FS tools are coordinated, so expanding and reorganizing file systems is trivial, nearly as simple as Sun's ZFS.
AFAIK AIX JFS != Linux JFS. It's more like OS/2 JFS, and IBM ported it to Linux to enable their OS/2 customers to move to Linux.
Also, whenever an FS reliability discussion pops up, I like to point people to this paper: http://www.cs.wisc.edu/wind/Publications/iron-sosp05.pdf The tables on page 8 are most amusing. They also show which filesystems were developed in the academic world and which were engineered in the real world ;)
Jure Pečar wrote:
AFAIK AIX JFS != Linux JFS. It's more like OS/2 JFS, and IBM ported it to Linux to enable their OS/2 customers to move to Linux.
that same OS/2 JFS was backported to AIX as JFS2, I believe.
Also whenever fs reliability discussion pops up I like to point people to this paper: http://www.cs.wisc.edu/wind/Publications/iron-sosp05.pdf
interesting, but that article is 5 years old. I'd be surprised if most of the implementation 'bugs' and anomalies discussed have not since been addressed.
I do wish more file systems and volume managers implemented block checksumming, which provides end-to-end integrity both for data and metadata. AFAIK, only Sun's ZFS fully implements this approach.
On 05.12.2009 22:04, John R Pierce wrote:
that same OS/2 JFS was backported to AIX as JFS2, I believe.
When JFS was implemented on OS/2, it was based on JFS on AIX. After that, JFS for Linux and JFS2 were based on the same code. Not sure I would say "backported", but there you go...
There are many differences between JFS and JFS2 on AIX, and the latter is better in many ways... more tuning options and support for shrinking.
Jure Pečar wrote:
On Sat, 05 Dec 2009 10:48:56 -0800 John R Pierce pierce@hogranch.com wrote:
Timo Schoeler wrote:
For enterprise environments my favorite FS is XFS, YMMV, though.
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
is B) no longer an issue?
You get horror stories about anything, depending on which people you ask. For example, where reiserfs was supposed to eat data left and right some years ago, I had 6 data-losing crashes on ext3 and 0 with reiserfs. On the same machine, same disks, so same conditions. Go figure.
Prior to 2.4.18, reiserfs was not in sync with the then ever-changing VFS layer, hence the data losses. It became stable after 2.4.18.
John R Pierce wrote:
Timo Schoeler wrote:
For enterprise environments my favorite FS is XFS, YMMV, though.
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
Fixed with the introduction of barriers for stuff that uses fsync (therefore xfs on a partition, not lvm, since dm does not support barriers), but then one probably uses hw raid with big bbu caches for xfs....
is B) no longer an issue?
I wanna know how come JFS/JFS2 (originally from IBM) isn't more popular in the Linux world? At least as implemented in AIX, it's rock stable, journaling, has excellent performance, and handles both huge files and lots of tiny files without blinking. JFS2 handles really huge file systems, too. I really like how, in AIX, the VM and FS tools are coordinated, so expanding and reorganizing file systems is trivial, nearly as simple as Sun's ZFS.
yeah, love jfs. Using that in Ubuntu land.
Chan Chung Hang Christopher wrote:
John R Pierce wrote:
Timo Schoeler wrote:
For enterprise environments my favorite FS is XFS, YMMV, though.
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
Fixed with the introduction of barriers for stuff that uses fsync (therefore xfs on a partition, not lvm, since dm does not support barriers), but then one probably uses hw raid with big bbu caches for xfs....
is B) no longer an issue?
I wanna know how come JFS/JFS2 (originally from IBM) isn't more popular in the Linux world? At least as implemented in AIX, it's rock stable, journaling, has excellent performance, and handles both huge files and lots of tiny files without blinking. JFS2 handles really huge file systems, too. I really like how, in AIX, the VM and FS tools are coordinated, so expanding and reorganizing file systems is trivial, nearly as simple as Sun's ZFS.
yeah, love jfs. Using that in Ubuntu land.
Do any of these handle per-file fsync() in a reasonable way (i.e. not waiting to flush the entire filesystem buffer)?
John R Pierce wrote:
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
I've both heard about and experienced first-hand data loss (pretty severe actually, some incidents pretty recent) with XFS after power failure. It used to be great for performance (not so great now that Ext4 is on the rise), but reliability was never its strong point. The bias on this list is surprising and unjustified.
FWIW, I was at SGI when XFS for Linux was released, and I probably was among its first users. It was great back then, but now it's over-rated.
On Dec 7, 2009, at 10:30 AM, Florin Andrei florin@andrei.myip.org wrote:
John R Pierce wrote:
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
I've both heard about and experienced first-hand data loss (pretty severe actually, some incidents pretty recent) with XFS after power failure. It used to be great for performance (not so great now that Ext4 is on the rise), but reliability was never its strong point. The bias on this list is surprising and unjustified.
Given that I stated my experience with XFS, and my rationale for using it in *my* production environment, I take exception to your calling said experience unjustified.
FWIW, I was at SGI when XFS for Linux was released, and I probably was among its first users. It was great back then, but now it's over-rated.
-- Florin Andrei
http://florin.myip.org
Ian Forde wrote:
On Dec 7, 2009, at 10:30 AM, Florin Andrei florin@andrei.myip.org wrote:
John R Pierce wrote:
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
I've both heard about and experienced first-hand data loss (pretty severe actually, some incidents pretty recent) with XFS after power failure. It used to be great for performance (not so great now that Ext4 is on the rise), but reliability was never its strong point. The bias on this list is surprising and unjustified.
Given that I stated my experience with XFS, and my rationale for using it in *my* production environment, I take exception to your calling said experience unjustified.
The thing is that none of you ever stated how XFS was used. With hardware raid or software raid or lvm or memory disk...
Anyway, data loss issues today should come down to improper setup, like disabling barriers on disks that have their write cache enabled.
thus Christopher Chan spake:
Ian Forde wrote:
On Dec 7, 2009, at 10:30 AM, Florin Andrei florin@andrei.myip.org wrote:
John R Pierce wrote:
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
I've both heard about and experienced first-hand data loss (pretty severe actually, some incidents pretty recent) with XFS after power failure. It used to be great for performance (not so great now that Ext4 is on the rise), but reliability was never its strong point. The bias on this list is surprising and unjustified.
Given that I stated my experience with XFS, and my rationale for using it in *my* production environment, I take exception to your calling said experience unjustified.
The thing is that none of you ever stated how XFS was used. With hardware raid or software raid or lvm or memory disk...
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
Anyway, data loss issues today should come down to improper setup, like disabling barriers on disks that have their write cache enabled.
That's exactly the point; maybe it is due to XFS coming from an enterprise-class OS (IRIX) to the open source community. On IRIX, there was a distinctive hardware platform on which IRIX, and thus XFS, ran. When XFS was ported to GNU/Linux, it not only had to deal with different LVM and RAID devices/mechanisms, but also with some hassles when deployed in 32-bit environments, for which it just wasn't designed.
So, to sum it up: IMHO in most cases it was surely not XFS's fault when data loss occurred, but rather due to errors made when it was deployed (in GNU/Linux environments), be it 32-bit issues, (missing) barriers, or whatever.
It'd be interesting to see some statistics on XFS issues on IRIX vs GNU/Linux.
Regards,
Timo
Timo Schoeler wrote:
thus Christopher Chan spake:
Ian Forde wrote:
On Dec 7, 2009, at 10:30 AM, Florin Andrei florin@andrei.myip.org wrote:
John R Pierce wrote:
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
I've both heard about and experienced first-hand data loss (pretty severe actually, some incidents pretty recent) with XFS after power failure. It used to be great for performance (not so great now that Ext4 is on the rise), but reliability was never its strong point. The bias on this list is surprising and unjustified.
Given that I stated my experience with XFS, and my rationale for using it in *my* production environment, I take exception to your calling said experience unjustified.
The thing is that none of you ever stated how XFS was used. With hardware raid or software raid or lvm or memory disk...
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now. You mentioned some pretty hefty hardware in your other post...
thus Chan Chung Hang Christopher spake:
Timo Schoeler wrote:
thus Christopher Chan spake:
Ian Forde wrote:
On Dec 7, 2009, at 10:30 AM, Florin Andrei florin@andrei.myip.org wrote:
John R Pierce wrote:
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
I've both heard about and experienced first-hand data loss (pretty severe actually, some incidents pretty recent) with XFS after power failure. It used to be great for performance (not so great now that Ext4 is on the rise), but reliability was never its strong point. The bias on this list is surprising and unjustified.
Given that I stated my experience with XFS, and my rationale for using it in *my* production environment, I take exception to your calling said experience unjustified.
The thing is that none of you ever stated how XFS was used. With hardware raid or software raid or lvm or memory disk...
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now.
(Not company-critical stuff -- just my 2nd workstation, the one to mess around with; I haven't had problems yet -- which, of course, should invite nobody to test it [on critical data]...!)
You mentioned some pretty hefty hardware in your other post...
Which do you mean?
Timo
Timo Schoeler wrote:
thus Chan Chung Hang Christopher spake:
Timo Schoeler wrote:
thus Christopher Chan spake:
Ian Forde wrote:
On Dec 7, 2009, at 10:30 AM, Florin Andrei florin@andrei.myip.org wrote:
John R Pierce wrote:
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
I've both heard about and experienced first-hand data loss (pretty severe actually, some incidents pretty recent) with XFS after power failure. It used to be great for performance (not so great now that Ext4 is on the rise), but reliability was never its strong point. The bias on this list is surprising and unjustified.
Given that I stated my experience with XFS, and my rationale for using it in *my* production environment, I take exception to your calling said experience unjustified.
The thing is that none of you ever stated how XFS was used. With hardware raid or software raid or lvm or memory disk...
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now.
(Not company-critical stuff -- just my 2nd workstation, the one to mess around with; I haven't had problems yet -- which, of course, should invite nobody to test it [on critical data]...!)
Oh, nevermind.
You mentioned some pretty hefty hardware in your other post...
Which do you mean?
EMC2 storage...
On 08.12.2009 13:34, Chan Chung Hang Christopher wrote:
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now. You mentioned some pretty hefty hardware in your other post...
If XFS doesn't play well with LVM, how can it even be an option? I couldn't live without LVM...
Morten Torstensen wrote:
On 08.12.2009 13:34, Chan Chung Hang Christopher wrote:
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now. You mentioned some pretty hefty hardware in your other post...
If XFS doesn't play well with LVM, how can it even be an option? I couldn't live without LVM...
I meant it in the sense of data guarantee. XFS has a major history of losing data unless used with hardware raid cards that have a bbu cache. That changed when XFS got barrier support.
However, anything on LVM be it ext3, ext4 or XFS that has barrier support will not be able to use barriers because device-mapper does not support barriers and therefore, if you use LVM, it better be on a hardware raid array where the card has bbu cache.
Christopher Chan wrote:
Morten Torstensen wrote:
On 08.12.2009 13:34, Chan Chung Hang Christopher wrote:
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now. You mentioned some pretty hefty hardware in your other post...
If XFS doesn't play well with LVM, how can it even be an option? I couldn't live without LVM...
I meant it in the sense of data guarantee. XFS has a major history of losing data unless used with hardware raid cards that have a bbu cache. That changed when XFS got barrier support.
However, anything on LVM be it ext3, ext4 or XFS that has barrier support will not be able to use barriers because device-mapper does not support barriers and therefore, if you use LVM, it better be on a hardware raid array where the card has bbu cache.
Wait, just to be clear, are you saying that all use of LVM is a bad idea unless on hardware RAID? That's bad if it's true, since it seems to me that most modern distros like to use LVM by default. Am I missing something?
Mark Caudill wrote:
Wait, just to be clear, are you saying that all use of LVM is a bad idea unless on hardware RAID? That's bad if it's true, since it seems to me that most modern distros like to use LVM by default. Am I missing something?
if LVM is ignoring write barriers, it's not a good idea on hardware raid, either, at least not for applications that rely on committed writes, like transactional databases.
John R Pierce wrote:
Mark Caudill wrote:
Wait, just to be clear, are you saying that all use of LVM is a bad idea unless on hardware RAID? That's bad if it's true, since it seems to me that most modern distros like to use LVM by default. Am I missing something?
if LVM is ignoring write barriers, it's not a good idea on hardware raid, either, at least not for applications that rely on committed writes, like transactional databases.
Write barriers are for the case of getting a data guarantee with hard disks connected via sata/scsi that have their write caches enabled.
Hardware raid + bbu cache changes that game. LVM on hardware raid is safe due to the bbu cache (with the write caches on the connected hard drives set to off).
Mark Caudill wrote:
Christopher Chan wrote:
Morten Torstensen wrote:
On 08.12.2009 13:34, Chan Chung Hang Christopher wrote:
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now. You mentioned some pretty hefty hardware in your other post...
If XFS doesn't play well with LVM, how can it even be an option? I couldn't live without LVM...
I meant it in the sense of data guarantee. XFS has a major history of losing data unless used with hardware raid cards that have a bbu cache. That changed when XFS got barrier support.
However, anything on LVM be it ext3, ext4 or XFS that has barrier support will not be able to use barriers because device-mapper does not support barriers and therefore, if you use LVM, it better be on a hardware raid array where the card has bbu cache.
Wait, just to be clear, are you saying that all use of LVM is a bad idea unless on hardware RAID? That's bad if it's true, since it seems to me that most modern distros like to use LVM by default. Am I missing something?
Yes, the Linux kernel has long been criticized for a fake fsync/fdatasync implementation, at least since 2001. Unless you had your hard drive caches turned off, you were at risk of losing data no matter what you used: ext2, ext3, reiserfs, xfs, jfs, whether on lvm or not.
Write barriers were introduced to give data guarantees with hard drives that have their write cache enabled. Unfortunately, not everything has been given barrier support. LVM and JFS do not have write barrier support.
So it is: use LVM but turn off the write caches on the disks (painfully slow), or do not use LVM, use a filesystem with write barrier support, and enable the write caches on the disks.
Hardware raid with bbu caches was introduced to provide speed and data guarantees. The other option would be to use software raid, disable write caching, use a bbu nvram stick, and use ext3 with data=journal.
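To make that trade-off concrete, a sketch of the knobs involved (example device names; as far as I know, ext3 on CentOS 5 does not turn barriers on by default, so they have to be requested explicitly, while xfs enables them by default and offers nobarrier to switch them off behind a battery-backed cache):

hdparm -W /dev/sda       # show the current drive write cache setting
hdparm -W0 /dev/sda      # turn the cache off if you cannot use barriers
# ext3 on a plain partition, asking for barriers in /etc/fstab:
#   /dev/sda1  /data  ext3  defaults,barrier=1  1 2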
On Fri, Dec 11, 2009 at 09:20:24AM +0800, Christopher Chan wrote:
Mark Caudill wrote:
Christopher Chan wrote:
Morten Torstensen wrote:
On 08.12.2009 13:34, Chan Chung Hang Christopher wrote:
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now. You mentioned some pretty hefty hardware in your other post...
If XFS doesn't play well with LVM, how can it even be an option? I couldn't live without LVM...
I meant it in the sense of data guarantee. XFS has a major history of losing data unless used with hardware raid cards that have a bbu cache. That changed when XFS got barrier support.
However, anything on LVM be it ext3, ext4 or XFS that has barrier support will not be able to use barriers because device-mapper does not support barriers and therefore, if you use LVM, it better be on a hardware raid array where the card has bbu cache.
Wait, just to be clear, are you saying that all use of LVM is a bad idea unless on hardware RAID? That's bad if it's true, since it seems to me that most modern distros like to use LVM by default. Am I missing something?
Yes, the Linux kernel has long been criticized for a fake fsync/fdatasync implementation, at least since 2001. Unless you had your hard drive caches turned off, you were at risk of losing data no matter what you used: ext2, ext3, reiserfs, xfs, jfs, whether on lvm or not.
Write barriers were introduced to give data guarantees with hard drives that have their write cache enabled. Unfortunately, not everything has been given barrier support. LVM and JFS do not have write barrier support.
https://www.redhat.com/archives/dm-devel/2009-December/msg00079.html
"Barriers are now supported by all the types of dm devices."
-- Pasi
Write barriers were introduced to give data guarantees with hard drives that have their write cache enabled. Unfortunately, not everything has been given barrier support. LVM and JFS do not have write barrier support.
https://www.redhat.com/archives/dm-devel/2009-December/msg00079.html
"Barriers are now supported by all the types of dm devices."
Wunderbar!
Now if the IBM team will add barrier support to JFS...
On Dec 13, 2009, at 10:15 AM, Pasi Kärkkäinen pasik@iki.fi wrote:
On Fri, Dec 11, 2009 at 09:20:24AM +0800, Christopher Chan wrote:
Mark Caudill wrote:
Christopher Chan wrote:
Morten Torstensen wrote:
On 08.12.2009 13:34, Chan Chung Hang Christopher wrote:
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now. You mentioned some pretty hefty hardware in your other post...
If XFS doesn't play well with LVM, how can it even be an option? I couldn't live without LVM...
I meant it in the sense of data guarantee. XFS has a major history of losing data unless used with hardware raid cards that have a bbu cache. That changed when XFS got barrier support.
However, anything on LVM be it ext3, ext4 or XFS that has barrier support will not be able to use barriers because device-mapper does not support barriers and therefore, if you use LVM, it better be on a hardware raid array where the card has bbu cache.
Wait, just to be clear, are you saying that all use of LVM is a bad idea unless on hardware RAID? That's bad if it's true, since it seems to me that most modern distros like to use LVM by default. Am I missing something?
Yes, the Linux kernel has long been criticized for a fake fsync/fdatasync implementation, at least since 2001. Unless you had your hard drive caches turned off, you were at risk of losing data no matter what you used: ext2, ext3, reiserfs, xfs, jfs, whether on lvm or not.
Write barriers were introduced to give data guarantees with hard drives that have their write cache enabled. Unfortunately, not everything has been given barrier support. LVM and JFS do not have write barrier support.
https://www.redhat.com/archives/dm-devel/2009-December/msg00079.html
"Barriers are now supported by all the types of dm devices."
I wonder how long till it's backported to RHEL?
-Ross
On Dec 10, 2009, at 7:52 PM, Mark Caudill markca@codelulz.com wrote:
Christopher Chan wrote:
Morten Torstensen wrote:
On 08.12.2009 13:34, Chan Chung Hang Christopher wrote:
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now. You mentioned some pretty hefty hardware in your other post...
If XFS doesn't play well with LVM, how can it even be an option? I couldn't live without LVM...
I meant it in the sense of data guarantee. XFS has a major history of losing data unless used with hardware raid cards that have a bbu cache. That changed when XFS got barrier support.
However, anything on LVM be it ext3, ext4 or XFS that has barrier support will not be able to use barriers because device-mapper does not support barriers and therefore, if you use LVM, it better be on a hardware raid array where the card has bbu cache.
Wait, just to be clear, are you saying that all use of LVM is a bad idea unless on hardware RAID? That's bad if it's true, since it seems to me that most modern distros like to use LVM by default. Am I missing something?
If you use a leading-edge distro then it will most likely be using an LVM version with barrier support, as that was implemented as of 2.6.29-2.6.30+.
It should be backported by the next release of CentOS hopefully.
-Ross
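As an aside, a quick way to check whether barriers are actually in effect on a given ext3/LVM setup (a sketch; the exact message text varies by kernel version) is to look for the jbd fallback warning:

dmesg | grep -i barrier
# a line like "JBD: barrier-based sync failed on dm-0 - disabling barriers"
# means the filesystem asked for barriers, the device (here a dm volume)
# rejected them, and it fell back to running without them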
On that note, today perhaps those thinking of ext4 for production systems - especially shared multiuser systems - should check out CVE-2009-4131 ...
CVE-2009-4131: Arbitrary file overwrite in ext4
Insufficient permission checking in the ext4 filesystem could be exploited by local users to overwrite arbitrary files.
Ksplice update ID: mfm62pmh
2009/12/11 Ross Walker rswwalker@gmail.com
On Dec 10, 2009, at 7:52 PM, Mark Caudill markca@codelulz.com wrote:
Christopher Chan wrote:
Morten Torstensen wrote:
On 08.12.2009 13:34, Chan Chung Hang Christopher wrote:
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now. You mentioned some pretty hefty hardware in your other post...
If XFS doesn't play well with LVM, how can it even be an option? I couldn't live without LVM...
I meant it in the sense of data guarantee. XFS has a major history of losing data unless used with hardware raid cards that have a bbu cache. That changed when XFS got barrier support.
However, anything on LVM be it ext3, ext4 or XFS that has barrier support will not be able to use barriers because device-mapper does not support barriers and therefore, if you use LVM, it better be on a hardware raid array where the card has bbu cache.
Wait, just to be clear, are you saying that all use of LVM is a bad idea unless on hardware RAID? That's bad if it's true, since it seems to me that most modern distros like to use LVM by default. Am I missing something?
If you use a leading-edge distro then it will most likely be using an LVM version with barrier support, as that was implemented as of 2.6.29-2.6.30+.
It should be backported by the next release of CentOS hopefully.
-Ross
Best advisory link I've found:
http://www.vupen.com/english/advisories/2009/3468
2009/12/11 James Hogarth james.hogarth@gmail.com
On that note, today perhaps those thinking of ext4 for production systems - especially shared multiuser systems - should check out CVE-2009-4131 ...
CVE-2009-4131: Arbitrary file overwrite in ext4
Insufficient permission checking in the ext4 filesystem could be exploited by local users to overwrite arbitrary files.
Ksplice update ID: mfm62pmh
2009/12/11 Ross Walker rswwalker@gmail.com
On Dec 10, 2009, at 7:52 PM, Mark Caudill markca@codelulz.com wrote:
Christopher Chan wrote:
Morten Torstensen wrote:
On 08.12.2009 13:34, Chan Chung Hang Christopher wrote:
Speaking for me (on Linux systems) on top of LVM on top of md. On IRIX as it was intended.
That is a disaster combination for XFS even now. You mentioned some pretty hefty hardware in your other post...
If XFS doesn't play well with LVM, how can it even be an option? I couldn't live without LVM...
I meant it in the sense of data guarantee. XFS has a major history of losing data unless used with hardware raid cards that have a bbu cache. That changed when XFS got barrier support.
However, anything on LVM be it ext3, ext4 or XFS that has barrier support will not be able to use barriers because device-mapper does not support barriers and therefore, if you use LVM, it better be on a hardware raid array where the card has bbu cache.
Wait, just to be clear, are you saying that all use of LVM is a bad idea unless on hardware RAID? That's bad if it's true, since it seems to me that most modern distros like to use LVM by default. Am I missing something?
If you use a leading-edge distro then it will most likely be using an LVM version with barrier support, as that was implemented as of 2.6.29-2.6.30+.
It should be backported by the next release of CentOS hopefully.
-Ross
CentOS 5.x, ext3, md raid1
As I do not have a UPS for all machines, and I most often use md raid (level 1), I would like to turn the write cache off on all of my server disks. But how?
Below is what I have already found out:
On the page
http://lwn.net/Articles/350072/
I read that I could do this by running hdparm -W0 /dev/sdX, but when I run hdparm -h, it says that the -W option is “dangerous”.
This is the current hdparm output for one of my raid discs:
[root@mail etc]# hdparm /dev/sda

/dev/sda:
 IO_support  = 0 (default 16-bit)
 readonly    = 0 (off)
 readahead   = 256 (on)
 geometry    = 38913/255/63, sectors = 625142448, start = 0
(BTW, here is a hdparm tutorial for Ubuntu:
http://ubuntuforums.org/showthread.php?t=16360 )
And when I get this done, how can I make the setting last across reboots?
Regards, Jussi P.S. Do you really have to quote the whole thread when you respond?
On Fri, Dec 11, 2009 at 9:52 AM, Jussi Hirvi listmember@greenspot.fi wrote:
Centos 5.x, ext3, md raid1
As I do not have a UPS for all machines, and I most often use md raid (level 1), I would like to turn the write cache off on all of my server disks. But how?
Below is what I have already found out:
On the page
http://lwn.net/Articles/350072/
I read that I could do this by running hdparm -W0 /dev/sdX, but when I run hdparm -h, it says that the -W option is “dangerous”.
Don't worry, it is safe to use that option.
And when I get this done, how can I make the setting last across reboots?
rc.local it.
P.S. Do you really have to quote the whole thread when you respond?
No, just the relevant parts.
-Ross
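To spell out the rc.local suggestion, something like this (a sketch -- the device list is an example and needs to match your own disks):

# appended to /etc/rc.d/rc.local
for disk in /dev/sda /dev/sdb; do
    hdparm -W0 "$disk"    # turn the drive write cache off at every boot
done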
On Fri, Dec 11, 2009 at 9:52 AM, Jussi Hirvi listmember@greenspot.fi wrote:
P.S. Do you really have to quote the whole thread when you respond?
On 11.12.2009 20:28, Ross Walker wrote:
No, just the relevant parts.
:-D
My point exactly. What I meant to say was "oh please, don't quote everything all the time".
- Jussi
Florin Andrei wrote:
John R Pierce wrote:
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
I've both heard about and experienced first-hand data loss (pretty severe actually, some incidents pretty recent) with XFS after power failure. It used to be great for performance (not so great now that Ext4 is on the rise), but reliability was never its strong point. The bias on this list is surprising and unjustified.
Everyone on this list is somewhat accustomed to ignoring reports of bugs that are known to be fixed in current versions. Is there some reason to think that the current XFS on 64-bit Linux is more fragile or less well tested than ext4?
On Monday 07 December 2009, Florin Andrei wrote:
John R Pierce wrote:
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
I've both heard about and experienced first-hand data loss (pretty severe actually, some incidents pretty recent)
I'm sorry for your losses. That said, we've run many servers (100+) using many CentOS versions over the years and I don't know of one case of XFS caused data loss. For us XFS has always performed well and "just worked".
Our initial reason for using XFS over EXT3 was write performance on certain RAID-controllers but lately it's also about scalability (file system size).
with XFS after power failure. It used to be great for performance (not so great now that Ext4 is on the rise),
I am looking forward to EXT4, but it is currently a tech preview (compared to XFS, "proven for many years")...
Just my €0.02, Peter
but reliability was never its strong point. The bias on this list is surprising and unjustified.
FWIW, I was at SGI when XFS for Linux was released, and I probably was among its first users. It was great back then, but now it's over-rated.
Florin Andrei wrote:
John R Pierce wrote:
I've always avoided XFS because A) it wasn't supported natively in RHEL anyways, and B) I've heard far too many stories about catastrophic loss problems and day-long FSCK sessions after power failures [1] or what have you
I've both heard about and experienced first-hand data loss (pretty severe actually, some incidents pretty recent) with XFS after power failure. It used to be great for performance (not so great now that Ext4 is on the rise), but reliability was never its strong point. The bias on this list is surprising and unjustified.
Yes. I used XFS for a mail queue and once lost 4000 emails, thanks to XFS's aggressive caching, after a power loss before barriers were introduced. However, XFS now supports barriers, so as long as you do not use lvm, or you use hardware raid with a bbu cache and thus do not need barriers, you are safe.
FWIW, I was at SGI when XFS for Linux was released, and I probably was among its first users. It was great back then, but now it's over-rated.
For sure it is the most complicated filesystem in Linux, with the largest body of code.
On Sat, Dec 5, 2009 at 10:20 AM, Miguel Medalha miguelmedalha@sapo.pt wrote:
I am about to install a new server running CentOS 5.4. The server will contain pretty critical data that we can't afford to corrupt.
I would like to benefit from the extra speed and features of an ext4 filesystem, but I don't have any experience with it. Is there some member of the list who can enlighten me on whether ext4 is mature enough to be used on a production server without too much risk?
Thank you!
Regardless of the technical issues offered here, ask yourself this: Do you really want to be experimenting with a new file system on a production server with "pretty critical data"? Since you asked about "too much risk", I think you already answered the question.
Any sane process would involve installing it on a low priority test server, running for a while to see how it goes, and learning about new features or tools. After you've done that on a few lower priority servers, for maybe a year or so, then you might start to _think_ about using it on a production server like this.
My guess is that any additional speed can come from tuning other areas of your server and disk subsystem. What hardware do you have? What kind of disks? Using RAID? What level? Have you looked into aligning your partitions with the RAID blocks? I'm sure that some of the hardcore disk I/O people on the list can ask better questions and give more meaningful recommendations.
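To make the alignment point concrete, a hedged sketch for ext3 on a RAID array (the numbers are only an example: stride is the RAID chunk size divided by the filesystem block size, stripe-width is stride times the number of data disks):

# e.g. RAID5 with 4 data disks, 64k chunks, 4k filesystem blocks:
#   stride = 64k / 4k = 16, stripe-width = 16 * 4 = 64
mkfs.ext3 -E stride=16,stripe-width=64 /dev/md0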
On Sat, 2009-12-05 at 22:47 -0500, Brian Mathis wrote:
On Sat, Dec 5, 2009 at 10:20 AM, Miguel Medalha miguelmedalha@sapo.pt wrote:
I am about to install a new server running CentOS 5.4. The server will contain pretty critical data that we can't afford to corrupt.
I would like to benefit from the extra speed and features of an ext4 filesystem, but I don't have any experience with it. Is there some member of the list who can enlighten me on whether ext4 is mature enough to be used on a production server without too much risk?
Thank you!
Regardless of the technical issues offered here, ask yourself this: Do you really want to be experimenting with a new file system on a production server with "pretty critical data"? Since you asked about "too much risk", I think you already answered the question.
Any sane process would involve installing it on a low priority test server, running for a while to see how it goes, and learning about new features or tools. After you've done that on a few lower priority servers, for maybe a year or so, then you might start to _think_ about using it on a production server like this.
My guess is that any additional speed can come from tuning other areas of your server and disk subsystem. What hardware do you have? What kind of disks? Using RAID? What level? Have you looked into aligning your partitions with the RAID blocks? I'm sure that some of the hardcore disk I/O people on the list can ask better questions and give more meaningful recommendations.
Funny that - that's the kind of answer I was hoping to see on this list. The key issue was the fact that it's a production server. As a data point, I've been using mythtv at home for about 6 years. (Has it really been that long? Wow!) During that time, I've been using XFS filesystems for media storage for about the last 4 or 5. I haven't had a problem with it yet, though that doesn't preclude the possibility of it occurring at some later date.
(Even, now that I've written this, it may fail several seconds from now, given that I may have jinxed it!)
Anyhoo - due to this experience with it for my data at home, which is constantly being written and rewritten - (mythtv is pretty intensive on systems - run it for a few years and BELIEVE ME - you'll find out where the weak points in various OS components are...) I've found XFS safe enough to use at work on production database servers.
It works for me. It may not for you, but I'm happy so far.
Again - this may all change tomorrow, but YMMV, as there's no such thing as software liability, and open source may eat your cat, make your dog toss its cookies on your lap, and cause the universe to unspool itself in your Wheaties tomorrow. We all take our chances, and it's a matter of how much risk we're willing to shoulder. As I said, I went through my process and deemed it acceptable...
-I
Miguel Medalha wrote:
I am about to install a new server running CentOS 5.4. The server will contain pretty critical data that we can't afford to corrupt.
Just for the record, Theodore Ts'o marked ext4 as stable and ready for general usage more than one year ago [1]. On 25 December 2008 kernel 2.6.28 was released with ext4 considered ready for production. So, ext4 is not _that_ new anymore. One year later, Fedora 12 and Ubuntu 9.10 began using ext4 as the default.
I believe that by 5.5, or even 5.6, ext4 will no longer be a tech preview, considering that RH has extended the support lifetime so much and how limited ext3 is with current and future disk capacities (fsck on a 1TB volume is not funny). The current ext4 module is close to the one in 2.6.29, plus lots of fixes [2]
[1] http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=... [2] rpm -q --changelog kernel|grep ext4
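For anyone who wants to kick the tires on a scratch box rather than a production server, trying the tech preview on CentOS 5.4 looks roughly like this (a sketch from memory -- I believe the userland tools live in a separate e4fsprogs package on EL5, and the device/mount names are examples only):

yum install e4fsprogs        # tech-preview userland tools (package name from memory)
mkfs.ext4 /dev/sdc1          # example scratch device -- not your critical data
mount -t ext4 /dev/sdc1 /mnt/test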
On 12/9/2009 12:23 PM, Miguel Di Ciurcio Filho wrote:
Miguel Medalha wrote:
I am about to install a new server running CentOS 5.4. The server will contain pretty critical data that we can't afford to corrupt.
Just for the record, Theodore Ts'o marked ext4 as stable and ready for general usage more than one year ago [1]. On 25 December 2008 kernel 2.6.28 was released with ext4 considered ready for production. So, ext4 is not _that_ new anymore. One year later, Fedora 12 and Ubuntu 9.10 began using ext4 as the default.
I believe that by 5.5, or even 5.6, ext4 will no longer be a tech preview, considering that RH has extended the support lifetime so much and how limited ext3 is with current and future disk capacities (fsck on a 1TB volume is not funny). The current ext4 module is close to the one in 2.6.29, plus lots of fixes [2]
[1] http://git.kernel.org/?p=linux/kernel/git/torvalds/linux-2.6.git;a=commit;h=... [2] rpm -q --changelog kernel|grep ext4
My leaning is that 5.4 would be a bit too soon for production data, unless you have a very specific need and very good backups. But it's darned close to ready.
Waiting until 5.5 or 5.6 (or 6.0) or at least waiting until next spring sounds like a reasonable middle ground. That gives the Ubuntu and FC hordes time to beat on it in less controlled settings.