I'm sure many are running ext4 FSs in production, but I just want to be reassured that there are no major issues at the moment before starting a new project that looks like it will be using ext4.
I've previously been using xfs but the software for this project requires ext3/ext4.
I'm always very cautious before jumping onto a new FS (new in the sense that it is only now officially supported).
Thanks in advance!
PJ wrote:
<snip>
I can't speak for production servers, but my personal desktop is running ext4 partitions on a secondary HDD on top of RAID1 (the system started back on 5.3, so the primary HDD was not touched), and so far I have not seen any issues.
Ljubomir
PJ writes:
<snip>
I use it in production with several TB on top of mdraid+lvm, no problems so far. Nice and fast; I love the online resize feature. Go for it.
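For reference, an online grow on top of LVM goes roughly like this (the VG/LV names are assumed):

# Grow the LV, then grow the mounted ext4 to fill it:
lvextend -L +500G /dev/vg0/data
resize2fs /dev/vg0/data    # growing works online; shrinking needs an unmount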
-- Nux! www.nux.ro
Just another happy camper here. We have ext4 for some high-volume servers and have experienced no operational problems.
On Thursday 23 June 2011 19:16:37 PJ wrote:
<snip>
I'm running some 50 servers with ext4; each server has 2x15TB ext4 partitions. I haven't had an issue with that setup. The first server was set up 3 years ago. It is quite a bit faster than XFS in terms of write performance, and so far it has been reliable, without any major problems.
Keep in mind that the userland tools are limited: the biggest partition you can create with them at the moment is 16TB. You can recompile the tools to remove this limitation if that is a problem for you.
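To illustrate the limit and the rebuild Marian mentions (the exact error text and tarball version are from memory, so treat them as assumptions):

# Stock mke2fs refuses anything past 16TB with an error along the lines of
# "size of device too big to be expressed in 32 bits using a blocksize of 4096":
mkfs.ext4 /dev/sdb1
# Rebuilding the tools from a source tarball is the standard autotools dance:
tar xzf e2fsprogs-1.41.14.tar.gz
cd e2fsprogs-1.41.14
./configure && make && make install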
Regards, Marian Marinov
On Thu, Jun 23, 2011 at 1:07 PM, Marian Marinov mm@yuhu.biz wrote:
<snip>
Thanks for all the great replies, everyone.
I've got an 18TB partition - the limit is 16TB even on x86_64?
On Thu, Jun 23, 2011 at 12:31 PM, PJ pauljerome@gmail.com wrote:
<snip>
I've got an 18TB partition - the limit is 16TB even on x86_64?
Answering my own question: yes, 16TB is the limit. Has anyone here successfully compiled their own version of e2fsprogs that works over 16TB?
Looking at https://ext4.wiki.kernel.org/index.php/Ext4_Howto it says: "The code to create file systems bigger than 16 TiB is, at the time of writing this article, not in any stable release of e2fsprogs. It will be in future releases."
Not sure if the wiki is out of date or not...
Thanks!
On Thursday 23 June 2011 22:41:50 PJ wrote:
<snip>
Answering my own question: yes, 16TB is the limit. Has anyone here successfully compiled their own version of e2fsprogs that works over 16TB?
Looking at https://ext4.wiki.kernel.org/index.php/Ext4_Howto it says: "The code to create file systems bigger than 16 TiB is, at the time of writing this article, not in any stable release of e2fsprogs. It will be in future releases."
Not sure if the wiki is out of date or not...
What I have seen is only alpha/beta-quality code that adds this functionality.
I would not suggest that you use those patches. At least not on a production machine. I only wanted to mention that there is such code... not that it is actually working :)
Marian
On Thu, Jun 23, 2011 at 12:49 PM, Marian Marinov mm@yuhu.biz wrote:
<snip>
Thanks Marian, it looks like it's 2 x 9TB partitions for me, what a pain in the ass!
On Thu, Jun 23, 2011 at 01:04:34PM -0700, PJ wrote:
<big snip>
Thanks Marian, it looks like it's 2 x 9TB partitions for me, what a pain in the ass!
Here be dragons:
If you're running a database on it, you might re-think using a journaled filesystem. For that, ext2 will be faster and much less prone to unrecoverable data loss.
If you're running on large spindles, benchmark the performance during a rebuild of one drive. Yank a drive for a moment and watch performance fall off a cliff until the RAID is made whole.
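On md arrays that drill is easy to script (array and member names assumed):

# Fail a member, pull it, re-add it, then watch the rebuild while
# your normal benchmark runs against the filesystem:
mdadm /dev/md0 --fail /dev/sdc1
mdadm /dev/md0 --remove /dev/sdc1
mdadm /dev/md0 --add /dev/sdc1
watch -n 5 cat /proc/mdstat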
Exercise the storage using dt and fsopbench. If it survives them intact, you have little to fear.
dt: http://rhn.redhat.com/errata/RHEA-2005-872.html
fsopbench: http://insights.oetiker.ch/linux/fsopbench/
BTW, how long does restoring a 9TB partition from tape take? Is it longer than your SLA allows? I'd want to know the answer before putting it into production.
On Tue, Jul 5, 2011 at 3:05 AM, Charles Polisher cpolish@surewest.net wrote:
If you're running a database on it, you might re-think using a journaled filesystem. For that, ext2 will be faster and much less prone to unrecoverable data loss.
Did you mean EXT4, or in actual fact EXT2? I thought EXT4 was faster than EXT2?
If you're running a database on it, you might re-think using a journaled filesystem. For that, ext2 will be faster and much less prone to unrecoverable data loss.
Did you mean EXT4, or in actual fact EXT2? I thought EXT4 was faster than EXT2?
The optimum on an EXT basis for a filesystem that does not require journaling going forwards would be EXT4 with no journal... that way you get the benefit of extents etc without a journal slowing you down.... A better option than EXT2 ;)
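A minimal sketch of that (device name assumed):

# Create ext4 with extents but no journal:
mkfs.ext4 -O ^has_journal /dev/sdb1
# Or drop the journal from an existing, unmounted ext4:
tune2fs -O ^has_journal /dev/sdb1
# Verify that has_journal is gone:
dumpe2fs -h /dev/sdb1 | grep -i features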
On Tuesday, July 05, 2011 07:28 PM, James Hogarth wrote:
<snip>
The optimum on an EXT basis for a filesystem that does not require journaling going forwards would be EXT4 with no journal... that way you get the benefit of extents etc without a journal slowing you down.... A better option than EXT2 ;)
Test, test, and test again for your own particular case.
Christopher Chan wrote:
James Hogarth wrote:
If you're running a database on it, you might re-think using a journaled filesystem. For that, ext2 will be faster and much less prone to unrecoverable data loss.
Did you mean EXT4, or in actual fact EXT2? I thought EXT4 was faster than EXT2?
In general, and with some simplifying assumptions, a database consists of statically pre-allocated files. The process of extending the files happens at birth. The relative speed over the lifetime of the database is dominated by raw I/O, not by extending the files.
The optimum on an EXT basis for a filesystem that does not require journaling going forwards would be EXT4 with no journal... that way you get the benefit of extents etc without a journal slowing you down.... A better option than EXT2 ;)
Test, test, and test again for your own particular case.
Couldn't agree more!
A reminder that blind trust in filesystems is not always well placed: http://thread.gmane.org/gmane.comp.file-systems.ext4/6702
Everyone uses foo, therefore foo is what you should use: http://www.nizkor.org/features/fallacies/appeal-to-popularity.html
Important Person uses foo, therefore foo is what you should use: http://www.nizkor.org/features/fallacies/appeal-to-authority.html
I've been using foo for years in production with no problems: maybe http://www.nizkor.org/features/fallacies/composition.html
(I'm sharpening my axe for the "Use ZFS, it's bulletproof" discussion.)
On Tue, Jul 5, 2011 at 4:10 PM, Charles Polisher cpolish@surewest.net wrote:
(I'm sharpening my axe for the "Use ZFS, it's bulletproof" discussion.)
HAHA, what's your take on ZFS then?
We've been running ZFS on a few storage servers, both in the office and for our hosting clients, for about 2 years now, and all I can say is that it's rock solid.
With raidz2 (similar to RAID6) we've never had any data loss or corruption due to hard drive failures and long rebuilds. And if you use SSDs for the ZIL & L2ARC cache, it's super fast. The same systems with EXT3 simply couldn't match the performance we got from ZFS.
BUT, since we're not allowed to talk about anything other than CentOS on this list, people don't mention it.
On Tuesday, July 05, 2011 10:26 PM, Rudi Ahlers wrote:
(I'm sharpening my axe for the "Use ZFS, it's bulletproof" discussion.)
/me puts on asbestos suit...stares...switches to asbestos armor instead.
HAHA, what's your take on ZFS then?
We've been running ZFS on a few storage servers, both in the office and for our hosting clients, for about 2 years now, and all I can say is that it's rock solid.
+1
Although I have seen screams from others on the opensolaris/openindiana lists I personally have not experienced them.
With raidz2 (similar to RAID6) we've never had any data loss or corruption due to hard drive failures and long rebuilds. And if you use SSDs for the ZIL & L2ARC cache, it's super fast. The same systems with EXT3 simply couldn't match the performance we got from ZFS.
I take it you limit your raidz2 arrays to a maximum of 9 drives?
/me wonders what an md raid array with an ext3 fs that has its journal on an SSD in full data journal mode gives in terms of performance.
I would not give zfs the performance crown just yet. Have you tried using ext3 with an external journal on the ssd and ext3 on raid6? What kind of usage pattern do you have on those zfs filesystems?
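Something along these lines, with assumed device names (note that the journal device and the filesystem have to share a block size):

# Dedicate an SSD partition as an external journal:
mke2fs -O journal_dev -b 4096 /dev/sdb1
# Build ext3 on the md array, pointing at that journal:
mke2fs -j -b 4096 -J device=/dev/sdb1 /dev/md0
# Mount with full data journaling:
mount -o data=journal /dev/md0 /srv/data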
BUT, since we're not allowed to talk about anything else other than CentOS on this list people don't mention it.
I find that this list is generally tolerant of offtopic but technical topics. What it does not like is flamewars made of posts that have zero technical merit.
On Tue, Jul 5, 2011 at 4:46 PM, Christopher Chan christopher.chan@bradbury.edu.hk wrote:
We've been running ZFS on a few storage servers, both in the office and for our hosting clients, for about 2 years now, and all I can say is that it's rock solid.
+1
Although I have seen screams from others on the opensolaris/openindiana lists I personally have not experienced them.
True, but that you'll get in any industry, with any product - even EMC or other similar large and expensive equipment vendors :)
<snip>
I take it you limit your raidz2 arrays to a maximum of 9 drives?
Yes, in a 12-bay chassis we use:
1x L2ARC SSD
2x ZIL SSDs, mirrored
9x SATA/SAS drives (depending on application) in raidz2, which effectively gives us 7 usable drives
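In zpool terms that layout would come out roughly like this (disk names assumed):

# 9 data disks in raidz2, a mirrored ZIL, and one L2ARC cache device:
zpool create tank raidz2 sdc sdd sde sdf sdg sdh sdi sdj sdk \
  log mirror sda sdb \
  cache sdl
zpool status tank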
I am contemplating using 24-bay chassis instead, purely from a cost (hardware, power, cooling) point of view, and then using 2x raidz2 pools.
/me wonders what an md raid array with an ext3 fs that has its journal on an SSD in full data journal mode gives in terms of performance.
I honestly haven't tried this yet, probably because when I looked at how it works, it's only the journal which runs on the SSD, so reads won't benefit much from it, only writes. But with ZFS you get all the data, read & write, cached on SSD.
I would not give zfs the performance crown just yet. Have you tried using ext3 with an external journal on the ssd and ext3 on raid6? What kind of usage pattern do you have on those zfs filesystems?
It's generally shared & VPS hosting, so basically: websites, email, databases, logs, etc :)
BUT, since we're not allowed to talk about anything else other than CentOS on this list people don't mention it.
I find that this list is generally tolerant of offtopic but technical topics. What it does not like is flamewars made of posts that have zero technical merit.
True, but have you seen how quickly some "grumpy mailing list activist" can derail a thread with "please don't post OT stuff here", and then the whole conversation just dies down...
On 7/5/2011 1:06 PM, Rudi Ahlers wrote:
/me wonders what an md raid array with an ext3 fs that has its journal on an SSD in full data journal mode gives in terms of performance.
I honestly haven't tried this yet, probably because when I looked at how it works, it's only the journal which runs on the SSD, so reads won't benefit much from it, only writes. But with ZFS you get all the data, read & write, cached on SSD.
How much can that matter? Reads are going to be cached in main RAM anyway - which is pretty cheap these days.
On Tue, Jul 5, 2011 at 8:20 PM, Les Mikesell lesmikesell@gmail.com wrote:
How much can that matter? Reads are going to be cached in main RAM anyway - which is pretty cheap these days.
Yes, but I suppose it all depends on the needs of the server in question :)
In our case, with web servers, reads (i.e. opening websites, downloading content) far outweigh writes (which are basically logs, file uploads, and sessions being written to disk).
In case of forums (we have many clients with forums) reads & writes are sometimes equal, but even then reads are still more common in our case than writes.
On 7/5/2011 1:30 PM, Rudi Ahlers wrote:
On Tue, Jul 5, 2011 at 8:20 PM, Les Mikesell lesmikesell@gmail.com wrote:
How much can that matter? Reads are going to be cached in main RAM anyway - which is pretty cheap these days.
<snip>
But it doesn't matter if you lose the read cache in RAM - and the OS is going to keep a copy there as long as it can anyway. The point of SSD caching of journals/writes is that it survives a reboot. If you have a lot more SSD than spare RAM it might save a few seeks as a side effect but why not just add RAM if that matters?
On Tue, Jul 5, 2011 at 8:49 PM, Les Mikesell lesmikesell@gmail.com wrote:
<snip>
But it doesn't matter if you lose the read cache in RAM - and the OS is going to keep a copy there as long as it can anyway. The point of SSD caching of journals/writes is that it survives a reboot. If you have a lot more SSD than spare RAM it might save a few seeks as a side effect but why not just add RAM if that matters?
It's not always easy, or even possible, to add more RAM, especially since the storage servers weren't fitted with motherboards that can take more than, say, 8 or 16GB of RAM.
On 07/05/11 7:10 AM, Charles Polisher wrote:
In general, and with some simplifying assumptions, a database consists of statically pre-allocated files. The process of extending the files happens at birth. The relative speed over the lifetime of the database is dominated by raw I/O, not by extending the files.
That's not even remotely true of many databases. In PostgreSQL, for example, the files are extended as they are updated/inserted, as are the WAL files.
On 07/05/11 7:10 AM, Charles Polisher wrote:
In general, and with some simplifying assumptions, a database consists of statically pre-allocated files. The process of extending the files happens at birth. The relative speed over the lifetime of the database is dominated by raw I/O, not by extending the files.
That's not even remotely true of many databases. In PostgreSQL, for example, the files are extended as they are updated/inserted, as are the WAL files.
The PostgreSQL wiki seems to say that database tables are allocated in 1GB extents. In workloads with which I am familiar, with an RDBMS the extents don't bounce around all that much, i.e. the vast majority of writes do not result in a change to the underlying database's storage allocation. Once in a while a new extent is allocated. http://www.postgresql.org/docs/current/static/storage-file-layout.html I suppose there could be exceptions, but I haven't run across one personally.
The "WAL" files you refer to are apparently database transaction logs. According to the wiki, these too are allocated in extents (WAL segments) of 16MB each.
I am not persuaded that the point I was making was erroneous.
On 07/05/11 9:04 PM, Charles Polisher wrote:
The PostgreSQL wiki seems to say that database tables are allocated in 1GB extents. <snip>
you misread that.
When a table or index exceeds 1 GB, it is divided into gigabyte-sized segments. The first segment's file name is the same as the filenode; subsequent segments are named filenode.1, filenode.2, etc. This arrangement avoids problems on platforms that have file size limitations. ...
Each file is no larger than 1GB (by default), but it's written and extended as needed, not in any fixed-size increments.
The "WAL" files you refer to are apparently database transaction logs. According to the wiki, these too are allocated in extents (WAL segments) of 16MB each.
The WAL logs are 16MB files, also written sequentially as needed, and nearly continuously on an insert/update-intensive database. They are not reused; rather, old WAL files are deleted (unless you're archiving), and new ones are created continuously.
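That churn is easy to watch on a live system. A quick sketch, where the data directory and the database OID are assumed examples:

# WAL segments: fixed 16MB files, created and removed continuously:
watch -n 1 'ls -lt /var/lib/pgsql/data/pg_xlog | head'
# Table segment files, each capped at 1GB (filenode, filenode.1, ...):
ls -l /var/lib/pgsql/data/base/16384 | head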
On Wed, Jul 6, 2011 at 6:17 AM, John R Pierce pierce@hogranch.com wrote:
<snip>
Hi Everyone,
I just tried to install ext4 on a CentOS 5 machine but it failed. Does anyone know which repository it is in?
root@usaxen01:[~]$ cat /etc/redhat-release
CentOS release 5 (Final)

root@usaxen01:[~]$ uname -a
Linux usaxen01 2.6.18-8.1.15.el5xen #1 SMP Mon Oct 22 09:01:12 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux

root@usaxen01:[~]$ yum -y install e4fsprogs
Loading "installonlyn" plugin
Setting up Install Process
Setting up repositories
Reading repository metadata in from local files
Excluding Packages in global exclude list
Finished
Parsing package install arguments
Nothing to do
Rudi Ahlers wrote:
<snip>
root@usaxen01:[~]$ uname -a
Linux usaxen01 2.6.18-8.1.15.el5xen #1 SMP Mon Oct 22 09:01:12 EDT 2007 x86_64 x86_64 x86_64 GNU/Linux
Your kernel is old: "Starting with kernel-2.6.18-128.el5, ext4 support is enabled." You should be on CentOS 5.3 at least.
And be careful to leave boot partition on ext3.
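On a current box the whole exercise should be short; a sketch, with the device name assumed:

# ext4 needs kernel 2.6.18-128.el5 or later:
uname -r
# On CentOS 5 the ext4 tools live in e4fsprogs, not e2fsprogs:
yum install e4fsprogs
# Format a data partition - not /boot:
mkfs.ext4 /dev/sdb1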
Ljubomir
On 23.07.2011 00:16, Matt wrote:
And be careful to leave boot partition on ext3.
Why is that? Also, does the 5.6 install DVD offer ext4 in the graphical install? I don't think I saw it last time.
The grub shipping with 5.6 cannot boot from ext4-formatted partitions.
http://docs.redhat.com/docs/en-US/Red_Hat_Enterprise_Linux/5/html-single/5.6...
"As of Red Hat Enterprise Linux 5.6 the ext4 file system is fully supported. However, provisioning ext4 file systems with the anaconda installer is not supported, and ext4 file systems need to be provisioned manually after the installation."
Alexander
On 7/5/11 6:28 AM, James Hogarth wrote:
<snip>
The optimum on an EXT basis for a filesystem that does not require journaling going forwards would be EXT4 with no journal... that way you get the benefit of extents etc without a journal slowing you down.... A better option than EXT2 ;)
Won't that mean that starting up after a crash or power loss will always require an fsck, which might be slow depending on how many files are on the partition?
On Thursday 23 June 2011 22:31:28 PJ wrote:
<snip>
I've got an 18TB partition - the limit is 16TB even on x86_64?
Yes. At least it was so last year; I haven't checked recently, and I don't have a spare machine to repartition for the test. We have a 30TB RAID6 array and I was really annoyed that I had to make two partitions to utilize the whole space.
The wiki pages are still not updated: http://en.wikipedia.org/wiki/Comparison_of_file_systems https://ext4.wiki.kernel.org/index.php/Ext4_Howto
NOTE: Although very large filesystems are on ext4's feature list, e2fsprogs currently still limits the filesystem size to 2^32 blocks (16TiB for a 4KiB-block filesystem). Allowing filesystems larger than 16T is one of the very next high-priority features to complete for ext4.
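(That limit is just the 32-bit block counter: 2^32 blocks x 4KiB/block = 2^44 bytes = 16TiB.)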
We have a single 27TB partition (35 x 1TB drives as RAID5+0 in an HP MDS600), just formatted it xfs and had no problems with it so far. It's used as scratch space so not too concerned about performance.
--Russell
On Friday 24 June 2011 04:34:20 Smithies, Russell wrote:
We have a single 27TB partition (35 x 1TB drives as RAID5+0 in an HP MDS600), just formatted it xfs and had no problems with it so far. It's used as scratch space so not too concerned about performance.
--Russell
I have compared the performance of both XFS and Ext4. Since I use those big machines for backups, write performance was very important to me, and XFS was almost twice as slow.
But let's leave XFS alone :) Ext4 is the way to go :)
Marian
On 6/23/11 8:44 PM, Marian Marinov wrote:
I have compared the performance of both XFS and Ext4. Since I use those big machines for backups, write performance was very important to me, and XFS was almost twice as slow.
Twice as slow? At what kind of operations? I don't think any filesystem should have that much overhead at things like writing large files.
On 24/06/2011 03:44, Marian Marinov wrote:
<snip>
I have compared the performance of both XFS and Ext4. Since I use those big machines for backups, write performance was very important to me, and XFS was almost twice as slow.
But let's leave XFS alone :) Ext4 is the way to go :)
Marian
I am using XFS on an HPC cluster, as one single 14 TB partition, with no problems so far.
See this news item on Phoronix: XFS is becoming cleaner and leaner. I am happy to use ext4 instead of ext3 on usual partitions, but XFS on big partitions still seems to me a good choice. Let's see what happens in the future. http://www.phoronix.com/scan.php?page=news_item&px=OTU4OA
Alain
On Fri, Jun 24, 2011 at 8:45 AM, Alain Péan alain.pean@lpp.polytechnique.fr wrote:
<snip>
See this news item on Phoronix: XFS is becoming cleaner and leaner. I am happy to use ext4 instead of ext3 on usual partitions, but XFS on big partitions still seems to me a good choice. Let's see what happens in the future. http://www.phoronix.com/scan.php?page=news_item&px=OTU4OA
Alain
Btrfs happens in the future. :-)
On Jun 23, 2011, at 9:44 PM, Marian Marinov mm@yuhu.biz wrote:
<snip>
I have compared the performance of both XFS and Ext4. Since I use those big machines for backups, write performance was very important to me, and XFS was almost twice as slow.
But let's leave XFS alone :) Ext4 is the way to go :)
I use XFS for ESX NFS datastores and performance is very good for me.
Though I know that XFS will only perform as well as the underlying physical storage, and Ext4 may use more aggressive caching, which makes less performant storage seem better.
-Ross
On Jun 23, 2011, at 12:16 PM, PJ wrote:
<snip>
I've seen some interesting behavior from "df" on an ext4 file system just today, on a fully-patched CentOS 5.6 system. I was running "watch -d -n 1 df -B G" while copying several TB around. One second, "df" would report that 1600 GB were in use. The next, I'd be up to 2500 GB, and then over 3000 GB. Then it would drop down to 1200 GB and start counting up again. The amount of disk space actually in use, as reported by "du", was closer to 600 GB. I should mention that this is just a sample of the observed behavior. It seemed like "df" would start each fluctuation cycle at the correct number, and I would sometimes catch it there, but it would be off by a couple of TB before it re-cycled. Is this something anyone else has seen? This was a new 10 TB file system formatted as ext4 directly, rather than formatted as ext3 and converted to ext4.
I've also noticed that I seem to lose more disk space to general overhead than I did with ext3. I'm not talking about the space reserved for root - I've set "-m 0" in both cases. I mean that on an 8 GB logical volume, the formatted size would be 7.8 GB under ext3, versus 7.5 GB with ext4. A 16 GB file system in ext3 would be 15 GB in ext4. Does that match expectations?
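For anyone wanting to compare, dumpe2fs shows where the space goes; the larger default inodes and the journal are the usual suspects. The device name here is assumed:

dumpe2fs -h /dev/vg0/lvtest | grep -iE 'inode count|inode size|journal|reserved'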
Thanks,
James
On 6/23/2011 12:16 PM, PJ wrote:
<snip>
Works fine here. I think you would have been jumping the gun if you were asking this in 2009, but by now, in 2011, it's well understood and the tools are fine. It's been around long enough.
I use it anywhere that I have multi-gigabyte files that need to be handled with speed (deleting large files on ext3 is an exercise in patience) or where I have lots and lots of little files (which ext3 sometimes had trouble with).