Hi, what would you recommend as a filesystem for a partition greater than 16 TiB? This is for a production server (that is, no ext4 recommendations, please :) ). What experience have you had with your preferred FS? (good and not-so-good points)
Thank you, Adrian
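For context on where the 16 TiB figure comes from (this note is an editorial addition, not from the thread): ext3 addresses blocks with a 32-bit number, so with the usual 4 KiB block size the ceiling falls out of simple arithmetic. A quick back-of-the-envelope check in shell:

```shell
# ext2/ext3 address blocks with a 32-bit block number; with the common
# 4 KiB block size, the largest addressable filesystem is
# 2^32 blocks * 4 KiB.
blocks=4294967296          # 2^32
block_size=4096            # 4 KiB
tib=1099511627776          # 2^40 bytes per TiB
echo "$(( blocks * block_size / tib )) TiB"    # prints: 16 TiB
```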
Hi,
Hi, what would you recommend as a filesystem for a partition greater than 16 TiB? This is for a production server (that is, no ext4 recommendations, please :) ). What experience have you had with your preferred FS? (good and not-so-good points)
Thank you, Adrian
I personally would go with XFS. It gave me the best experience of any filesystem I have ever used (well, that was XFS on IRIX).
Timo
2009/5/6 Timo Schoeler timo.schoeler@riscworks.net:
I personally would go with XFS. It gave me the best experience of any filesystem I have ever used (well, that was XFS on IRIX).
Timo
Same here. Make sure to use a UPS, though. Laurent.
Adrian Sevcenco schrieb:
Hi, what would you recommend as a filesystem for a partition greater than 16 TiB? This is for a production server (that is, no ext4 recommendations, please :) ). What experience have you had with your preferred FS? (good and not-so-good points)
Thank you, Adrian
Does anybody actually run such a thing on Linux?
How long does a FSCK take once it's 80% populated? How much RAM does that need?
The FSCK on my Virtuozzo-partition takes long enough - and it's only 500 GB or so.
Rainer
Rainer Duffner wrote:
Does anybody actually run such a thing on Linux?
We will .. 2 x RAID6, each with 12 drives (a 24-drive machine) using 2 TB drives .. that is 20 TB per volume
How long does a FSCK take once it's 80% populated?
I strongly hope that I will never know :)) It has 2 redundant PSUs, each on a different UPS ...
How much RAM does that need?
Minimal .. it is a storage-only machine, so 4 GB is enough, as the connection is only GigE
The FSCK on my Virtuozzo-partition takes long enough - and it's only 500 GB or so.
Even at home it is worthwhile to have a UPS for each machine..
Adrian
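As a back-of-the-envelope check of the capacity figures above (RAID6 spends two drives per array on parity), an editorial sketch:

```shell
# RAID6 usable capacity: (drives - 2 parity drives) * per-drive size.
drives=12        # drives per RAID6 array, per the setup above
drive_tb=2       # 2 TB drives
parity=2         # RAID6 dedicates two drives' worth of space to parity
usable=$(( (drives - parity) * drive_tb ))
echo "${usable} TB usable per volume"    # prints: 20 TB usable per volume
```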
Adrian Sevcenco schrieb:
Rainer Duffner wrote:
Does anybody actually run such a thing on Linux?
We will ..
That's not what I was asking ;-)
2 x RAID6, each with 12 drives (a 24-drive machine) using 2 TB drives .. that is 20 TB per volume
How long does a FSCK take once it's 80% populated?
I strongly hope that I will never know :))
It will fsck on every nth reboot anyway, or after so-and-so many days without an fsck, at the next reboot.
It has 2 redundant PSUs, each on a different UPS ...
How much RAM does that need?
Minimal .. it is a storage-only machine, so 4 GB is enough, as the connection is only GigE
I asked about the FSCK. Usually, it requires some RAM, too.
The FSCK on my Virtuozzo-partition takes long enough - and it's only 500 GB or so.
Even at home it is worthwhile to have a UPS for each machine..
It's running in a datacenter with UPSs. But once I reboot it, it's the "fsck-every-n-days" thing.
I don't think it's a good idea to disable that behaviour.
Rainer
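As an aside, the mount-count/interval trigger Rainer describes is an ext2/ext3 tunable that can be inspected and adjusted with tune2fs. A sketch against a scratch image file, so no real volume is touched (the path and the count value are made up for illustration):

```shell
# Create a small scratch file and put an ext3 filesystem on it.
# (No root needed when working on a plain file rather than a device.)
dd if=/dev/zero of=/tmp/scratch.img bs=1M count=16 2>/dev/null
mke2fs -q -F -j /tmp/scratch.img

# Show the current check triggers: maximum mount count and check interval.
tune2fs -l /tmp/scratch.img | grep -Ei 'maximum mount count|check interval'

# Raise the mount-count trigger instead of disabling checks outright.
tune2fs -c 50 /tmp/scratch.img >/dev/null
tune2fs -l /tmp/scratch.img | grep -i 'maximum mount count'
```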
To shorten the fsck discussion with XFS (quoting the man page):
fsck.xfs(8)
NAME
fsck.xfs - do nothing, successfully

SYNOPSIS
fsck.xfs [ filesys ... ]

DESCRIPTION
fsck.xfs is called by the generic Linux fsck(8) program at startup to check and repair an XFS filesystem. XFS is a journaling filesystem and performs recovery at mount(8) time if necessary, so fsck.xfs simply exits with a zero exit status.

If you wish to check the consistency of an XFS filesystem, or repair a damaged or corrupt XFS filesystem, see xfs_check(8) and xfs_repair(8).

FILES
/etc/fstab.

SEE ALSO
fsck(8), fstab(5), xfs(5), xfs_check(8), xfs_repair(8).
See also: http://en.wikipedia.org/wiki/XFS
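In other words, since fsck.xfs is a no-op, a consistency check on XFS is always an explicit, offline step. A sketch of what that looks like; /dev/sdX1 and /data here are placeholders, not details from this thread:

```shell
# XFS must be unmounted (or at least not writable) before checking.
umount /data

# Dry run: report inconsistencies without modifying anything.
xfs_repair -n /dev/sdX1

# Only if problems were reported: actually repair.
# (On systems of this era, xfs_check /dev/sdX1 is the read-only
#  alternative, though it tends to need even more RAM.)
xfs_repair /dev/sdX1

mount /data
```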
HTH,
Timo
Rainer Duffner wrote:
It's running in a datacenter with UPSs. But once I reboot it, it's the "fsck-every-n-days" thing.
I don't think it's a good idea to disable that behaviour.
Hmmm. XFS will not do that; checking the file system on every nth reboot is not its normal behaviour. I normally turn that off with ext3, too - if the system goes down unexpectedly, then I normally run one.
BTW: ext3 handles "Out of power" corruptions better than xfs does.
Ralph
-----Original Message----- From: centos-bounces@centos.org [mailto:centos-bounces@centos.org] On Behalf Of Ralph Angenendt Sent: Wednesday, May 06, 2009 3:55 AM To: centos@centos.org Subject: Re: [CentOS] fs for > 16 TiB partition
Hmmm. XFS will not do that; checking the file system on every nth reboot is not its normal behaviour. I normally turn that off with ext3, too - if the system goes down unexpectedly, then I normally run one.
BTW: ext3 handles "Out of power" corruptions better than xfs does.
Ralph
While fsck.xfs is essentially nothing more than a /bin/true, do note that if you EVER need to run xfs_check or xfs_repair, you need more than a fair amount of RAM to properly check the volume. See the following URL for the gory details: http://oss.sgi.com/archives/xfs/2005-08/msg00391.html
-- Gary L. Greene, Jr. IT Operations Minerva Networks, Inc. Cell: (650) 704-6633 Phone: (408) 240-1239
Ralph Angenendt wrote:
BTW: ext3 handles "Out of power" corruptions better than xfs does.
Power failure is -not- the only cause of this sort of condition... has no one here ever had a kernel panic? I had a perfectly good server panic shortly after a cooling fan failed, combined with above-normal room temperatures due to a somewhat overloaded HVAC. The CPU got hot (but not hot enough to trigger its thermal trip) and got some kind of cache error that was a fatal bugcheck in the CPU.
I'm somewhat surprised the Linux community hasn't embraced IBM's JFS... I know it's supported in many distributions (but not natively by RH), but it's out there in the "also-ran" department. Being rather conservative by nature, I've only used JFS with AIX because of this, but found it to be a -very- robust file system with very good all-around performance in a wide range of scenarios (really big files, as well as really large numbers of small files).
John R Pierce wrote:
I'm somewhat surprised the Linux community hasn't embraced IBM's JFS... I know it's supported in many distributions (but not natively by RH), but it's out there in the "also-ran" department. Being rather conservative by nature, I've only used JFS with AIX because of this, but found it to be a -very- robust file system with very good all-around performance in a wide range of scenarios (really big files, as well as really large numbers of small files).
Same here (however, I would put XFS into play first); JFS on AIX is very stable.
However, one has to clearly distinguish JFS from JFS2... (at least in the AIX world; see Wikipedia, which states 'In the other operating systems, such as OS/2 and Linux, only the second generation exists and is called simply JFS.[3] This should not be confused with JFS in AIX that actually refers to JFS1.' [0])
HTH,
Timo
Timo Schoeler wrote:
However, one has to clearly distinguish JFS from JFS2... (at least in the AIX world; see Wikipedia, which states 'In the other operating systems, such as OS/2 and Linux, only the second generation exists and is called simply JFS.[3] This should not be confused with JFS in AIX that actually refers to JFS1.' [0])
The AIX I worked with, 5.3L, supports both JFS[1] and JFS2... They are very similar structurally; JFS2 just supports larger volumes.
On Wed, May 6, 2009 at 10:48 AM, Adrian Sevcenco Adrian.Sevcenco@cern.ch wrote:
Hi, what would you recommend as a filesystem for a partition greater than 16 TiB? This is for a production server (that is, no ext4 recommendations, please :) ). What experience have you had with your preferred FS? (good and not-so-good points)
We've got a 110 TB XFS system in production, based on a logical volume striped over 9 boxes of SATA disk; it works like a charm with great throughput, as we stripe over 3 controllers :-)
The only whoopsie in 18+ months was when we recently added 3 more disk boxes and I grew the filesystem. The first attempt with xfs_grow only added a fraction of the available space. The second attempt gave a kernel panic. After a reboot everything was fine, with all space available.
Lesson learned: don't use xfs_grow unless you're in the general vicinity of the server ;-)
regards, Bent
On Wed, May 6, 2009 at 11:27 AM, Bent Terp bent@nagstrup.dk wrote:
Lesson learned: don't use xfs_grow unless you're in the general vicinity of the server ;-)
Correction: the command is xfs_growfs not xfs_grow
/B
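For reference, the grow operation being discussed typically looks like the sketch below; note that xfs_growfs takes the mount point of the *mounted* filesystem, not the device. The LV name, size, and mount point are placeholders, not details from this thread:

```shell
# Extend the logical volume under the filesystem first
# (here by 20 TB; vg0/data is a hypothetical LVM volume).
lvextend -L +20T /dev/vg0/data

# Then grow XFS online; xfs_growfs operates on the MOUNT POINT.
xfs_growfs /data

# Verify the new size.
df -h /data
```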
On Wed, May 06, 2009 at 11:27:27AM +0200, Bent Terp wrote:
We've got a 110 TB xfs system in production based on a logical volume striped over 9 boxes of SATA disk, works like a charm with great throughput as we stripe over 3 controllers :-)
Are you running x86 32bit or x86_64 ?
IIRC there have been problems with XFS on 32-bit kernels.. stack size related or so? So 64-bit has been the recommended way to go..
-- Pasi
On Thu, May 7, 2009 at 9:05 AM, Pasi Kärkkäinen pasik@iki.fi wrote:
Are you running x86 32bit or x86_64 ?
IIRC there have been problems with XFS on 32-bit kernels.. stack size related or so? So 64-bit has been the recommended way to go..
We run 64-bit on most machines cuz they've got more than 4 gigs of RAM. (And any besserwissers about to sound off about PAE kernels can kindly do so in another thread, cuz I'm NOT listening!)
As an aside, I hadn't heard of issues with 32-bit XFS, but in retrospect I can see the logic in it: a lot of company-supplied code has had 64-bit issues cuz it came from a 32-bit environment, but SGI was never "most companies" :-)
On Thursday 07 May 2009, Bent Terp wrote:
On Thu, May 7, 2009 at 9:05 AM, Pasi Kärkkäinen pasik@iki.fi wrote:
Are you running x86 32bit or x86_64 ?
IIRC there have been problems with XFS on 32-bit kernels.. stack size related or so? So 64-bit has been the recommended way to go..
We run 64-bit on most machines cuz they've got more than 4 gigs of RAM. (And any besserwissers about to sound off about PAE kernels can kindly do so in another thread, cuz I'm NOT listening!)
As an aside, I hadn't heard of issues with 32-bit XFS, but in retrospect I can see the logic in it: a lot of company-supplied code has had 64-bit issues cuz it came from a 32-bit environment, but SGI was never "most companies" :-)
This is not really a 32- vs. 64-bit issue. The problem is that Red Hat (unlike almost everyone else) builds their 32-bit kernel with a 4K kernel stack size, and XFS pretty much needs 8K kernel stacks.
/Peter
thus Pasi Kärkkäinen spake:
On Wed, May 06, 2009 at 11:27:27AM +0200, Bent Terp wrote:
We've got a 110 TB xfs system in production based on a logical volume striped over 9 boxes of SATA disk, works like a charm with great throughput as we stripe over 3 controllers :-)
Are you running x86 32bit or x86_64 ?
IIRC there have been problems with XFS on 32-bit kernels.. stack size related or so? So 64-bit has been the recommended way to go..
Obviously -- as SGI introduced one of the first, if not *the* first, 64-bit machines (the R4000) _ages_ ago (1991!)... ;)