Hello, I have 16 TB of storage: 8x 2 TB SATA drives in a RAID. I want to share it on the network via NFS. Which file system is best for it? Thank you. ——— Ashkan R
On 04.08.2012 15:01, ashkab rahmani wrote:
Hello, I have 16 TB of storage: 8x 2 TB SATA drives in a RAID. I want to share it on the network via NFS. Which file system is best for it?
No redundancy? That's a lot of data to lose. :-)
As for your question, I'd use ext4. It has caught up a lot with XFS and it's THE file system supported by RHEL and Fedora.
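For reference, a minimal sketch of the setup being discussed, assuming the RAID array shows up as /dev/md0 and the clients live on 192.168.1.0/24 (device, path and network are placeholders):

# create the filesystem and mount it
mkfs.ext4 -L data /dev/md0
mkdir -p /srv/data
mount /dev/md0 /srv/data

# export it over NFS (CentOS 6 style)
echo '/srv/data 192.168.1.0/24(rw,sync,no_subtree_check)' >> /etc/exports
exportfs -ra
service nfs start
chkconfig nfs on

On the client side, something like "mount -t nfs server:/srv/data /mnt" should then work.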
Thank you. I do have redundancy; I simplified the scenario. But I think ext4 is not as fast as the others. Is that true?
——— Ashkan R
On 04.08.2012 15:19, ashkab rahmani wrote:
Thank you. I do have redundancy; I simplified the scenario. But I think ext4 is not as fast as the others. Is that true?
Well, I think ext4 is pretty fast; maybe XFS has a slight edge over it in some scenarios. ZFS on Linux is still highly experimental and has received close to no testing. If you are in the mood for experiments, EL6.3 includes BTRFS as a technology preview for 64-bit machines. Give it a try and let us know how it goes.
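A rough sketch of trying that technology preview on CentOS 6.3, assuming the array is /dev/md0 (device name and mount point are placeholders):

yum install btrfs-progs
mkfs.btrfs /dev/md0
mount /dev/md0 /mnt/test
btrfs filesystem show

Treat it as a scratch filesystem only; the preview status means no upgrade or stability guarantees.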
Thank you, very useful. I think I'll try Btrfs or JFS; I'll send you the Btrfs results.
On 04.08.2012 15:36, ashkab rahmani wrote:
Thank you, very useful. I think I'll try Btrfs or JFS; I'll send you the Btrfs results.
Ilsistemista.net seems to have some good articles about filesystems. e.g. http://www.ilsistemista.net/index.php/linux-a-unix/33-btrfs-vs-ext3-vs-ext4-... Check them out.
On 08/04/2012 09:36 AM, ashkab rahmani wrote:
Thank you, very useful. I think I'll try Btrfs or JFS; I'll send you the Btrfs results.
Personally, I would use ext4 ... faster is not always better.
As Nux! initially said, ext4 is what RHEL and Fedora support as their main file system. I would (and do) use that. The 6.3 kernel does support XFS, and CentOS has the JFS tools in our extras directory, but I like tried and true over experimental.
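A minimal sketch of pulling in those JFS tools on CentOS 6, assuming the package is named jfsutils, is reachable from an enabled repo, and the target device is /dev/md0 (all assumptions):

yum install jfsutils
mkfs.jfs -q /dev/md0       # -q skips the confirmation prompt
mount -t jfs /dev/md0 /srv/data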
On 2012-08-04, Johnny Hughes johnny@centos.org wrote:
... I like tried and true over experimental.
Isn't XFS on linux tried and true by now? It's always worked great for me.
Does ext4 resolve the issue of slow fsck? Recently I had a ~500GB ext3 filesystem that hadn't been checked in a while; it took over 20 minutes to fsck. Meanwhile, a few months ago I had a problematic ~10TB XFS filesystem, and it took about 1-2 hours to fsck (IIRC 1.5 hrs). This was also a reason I switched away from reiserfs (this was well before Hans Reiser's personal problems)--a reiserfsck of a relatively modest filesystem took much longer than even an ext3 fsck.
If I get some time I will try it on some spare filesystems, but I'm curious what other people's experiences are.
I've looked into ZFS on linux, but it still seems not quite ready for real production use. I'd love to test it on a less crucial server when I get the chance. Their FAQ claims RHEL 6.0 support:
http://zfsonlinux.org/faq.html
--keith
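On the slow-fsck point above, a small sketch of checking and disabling the periodic boot-time checks on an ext3/ext4 volume (device name is a placeholder; the trade-off is that you then rely on the journal and on checks you schedule yourself):

tune2fs -l /dev/vg0/data | grep -i -E 'mount count|check'
tune2fs -c 0 -i 0 /dev/vg0/data    # never force a check based on mount count or interval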
On 08/04/2012 10:05 PM, Keith Keller wrote:
Isn't XFS on linux tried and true by now? It's always worked great for me.
I suppose that depends on your definition. It has only JUST become supported on RHEL ...
Does ext4 resolve the issue of slow fsck?
My experience is that I normally do not have any issues with ext3 on EL5 or ext4 on EL6 ... where an issue would be, for example, not being able to use the system because the I/O is too slow.
Other people might want to pick and prod and get the top 5 or 6% performance they can out of a machine ... I would rather use what is supported unless there is a reason that I can not do it in production.
That is why it's your machine ... you get to do whatever you want ... and keep all the pieces :D
2012/8/5 Johnny Hughes johnny@centos.org:
I suppose that depends on your definition. It has only JUST become supported on RHEL ...
Just? http://www.redhat.com/products/enterprise-linux-add-ons/file-systems/
At least it is supported on RHEL 5 as well, with an extra price tag.
-- Eero
On 08/05/2012 04:05 AM, Keith Keller wrote:
I've looked into ZFS on linux, but it still seems not quite ready for real production use. I'd love to test it on a less crucial server when I get the chance. Their FAQ claims RHEL 6.0 support:
Excellent! Do share your test / play experience.
On 8/4/2012 9:21 AM, Johnny Hughes wrote:
ext4 is what RHEL and Fedora support as their main file system. I would (and do) use that. The 6.3 kernel does support XFS and CentOS has the JFS tools in our extras directory, but I like tried and true over experimental.
xfs still has at least one big advantage over ext4 on EL6, AFAIK: supporting filesystems over 16 TB.
I am aware that ext4 is supposed to support 1 EiB filesystems[1], but due to a bug in e2fsprogs 1.41 (what EL6 ships) there is an artificial 16 TB limit[2].
I know we all like to pride ourselves on "stable" package versions, but this bug was fixed in the very next feature release of e2fsprogs, 1.42. Johnny, do you have any insight into why upstream isn't making an exception here, as they have for, say, Firefox? Surely they aren't going to hold it for EL7 and tout it as a "feature"?
Perhaps I am being unfair. Did they backport the bug fix only, and my failure to mkfs.ext4 a 22 TB partition was due to some other problem?
If someone wants me to try something, it'll have to wait at least a few weeks. That's the earliest I'm likely to have a > 16 TB RAID sitting around ready to be nuke-and-paved at a whim.
[1] https://en.wikipedia.org/wiki/Ext4 [2] http://e2fsprogs.sourceforge.net/e2fsprogs-release.html#1.42
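A quick sketch of what that limit looks like in practice, assuming a >16 TiB block device at /dev/sdb1 (device is a placeholder):

rpm -q e2fsprogs             # EL6 ships 1.41.x
mkfs.ext4 /dev/sdb1          # with 1.41 this refuses devices larger than 16 TiB
# with e2fsprogs 1.42 or later, the 64bit feature is supposed to lift the limit:
mkfs.ext4 -O 64bit /dev/sdb1

Whether a backported fix exists in a given EL6 update is exactly the question raised above.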
On Sat, 2012-08-04 at 10:21 -0500, Johnny Hughes wrote:
Personally, I would use ext4 ... faster is not always better.
+1. ext4 is 'plenty good', bullet-proof, and robustly supported.
What ext4 suffers most from is hangover impressions of its quality that have followed it from early ext3. (Even later versions of ext3 were considerably better than early ext3, especially with the introduction of dir_index, which solved a lot of big-folders-are-very-slow problems.)
ext4 uses extents, just like XFS. It can pre-allocate, just like XFS. It does delayed allocation, like XFS.
If you want really good performance, putting your journal external to the filesystem, preferably on really fast storage, will probably help more than anything else; certainly more than the type of filesystem.
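A minimal sketch of that external-journal setup, assuming /dev/md0 is the data array and /dev/sdb1 is a small partition on fast storage reserved for the journal (both placeholders):

mke2fs -O journal_dev -b 4096 /dev/sdb1            # format the journal device
mkfs.ext4 -b 4096 -J device=/dev/sdb1 /dev/md0     # create ext4 using that external journal

The block size of the journal device and of the filesystem must match, hence the explicit -b 4096 on both.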
On 04.08.2012 16:36, ashkab rahmani wrote:
Thank you, very useful. I think I'll try Btrfs or JFS; I'll send you the Btrfs results.
Please note: The Btrfs code of CentOS 6.3 is based on kernel 2.6.32. This is very experimental.
If you want to try Btrfs, then use kernel 3.2 or higher. (there are thousands of bug fixes and improvements since 2.6.32)
Anyway, I still recommend ext4 or xfs.
Best regards,
Morten
Nux! nux@li.nux.ro wrote:
ZFS on Linux is still highly experimental and has received close to no testing. If you are in the mood for experiments, EL6.3 includes BTRFS as a technology preview for 64-bit machines. Give it a try and let us know how it goes.
Using BTRFS now is like using ZFS in 2005.
ZFS is adult now, BTRFS is not.
Jörg
On 08/04/2012 05:06 PM, Joerg Schilling wrote:
Using BTRFS now is like using ZFS in 2005. ZFS is adult now, BTRFS is not
Can you quantify this in an impartial format relevant to CentOS? At the moment your statement is just a rant, and having come across your work in the past, I know you can do better than this.
Regards,
Karanbir Singh mail-lists@karan.org wrote:
Can you quantify this in an impartial format relevant to CentOS? At the moment your statement is just a rant, and having come across your work in the past, I know you can do better than this.
I would not call it a rant, but food for thought.
ZFS was distributed to the public after it turned 4; it has now been in public use for more than 7 years.
What is the age of BTRFS?
Experience with various filesystems shows that it takes 8-10 years to make a new filesystem mature.
Also, the OP did not ask about CentOS, but for a filesystem comparison.
So comparing filesystems seems to be the question. For ZFS, I know that it took until three years ago to get rid of nasty bugs. At that time, ZFS was 8.
So be careful with BTRFS until it has been in wide use for at least 4 years.
ZFS is the best I know of for filesystems >= 2 TB and for cases where you need flexible snapshots. ZFS has just one problem: it is slow when you ask it to verify a stable FS state (UFS is much faster here), but this ZFS "problem" is true for all filesystems on Linux because of the implementation of the Linux buffer cache.
And BTW: ZFS is based on the COW ideas I came up with in 1988, and the NetApp patents are also just based on my master's thesis, without giving me credit ;-)
There are few fs use cases where COW is not the best.
Jörg
On 04.08.2012 20:32, Joerg.Schilling@fokus.fraunhofer.de wrote:
ZFS is the best I know of for filesystems >= 2 TB and for cases where you need flexible snapshots.
Jörg,
Given your expertise, can you say how mature/stable/usable ZFS on Linux is, specifically on CentOS? That's what everybody is probably most interested in.
Nux! nux@li.nux.ro wrote:
Given your expertise, can you say how mature/stable/usable ZFS on Linux is, specifically on CentOS? That's what everybody is probably most interested in.
ZFS has been stable on FreeBSD for approximately 3 years.
ZFS itself is also stable.
I cannot speak for the stability on Linux, but I've read that there is a group working on a ZFS integration. The problem in this area is that Linux comes with a very limited VFS interface, and porters would either need to reduce ZFS functionality or ignore the VFS interface from Linux.
Jörg
Thank you very much. What do you think about JFS? Is it comparable with the others? ——— Ashkan R
On Sat, Aug 4, 2012 at 4:48 PM, ashkab rahmani ashkan82r@gmail.com wrote:
Thank you very much. What do you think about JFS? Is it comparable with the others?
I was very pro-JFS... until I lost 10 GB of very important data, and back then (2002) there was no way to recover a JFS volume (the data was on RAID, but some corruption occurred and I lost the whole drive; I mean, I ended up with a blank root).
Back in 2004 I asked one of the IBMers on the JFS team about it, and he had this to say:
----- "IBM will continue to invest in jfs as long as we feel that our customers get value from it."
Q: Will JFS be enhanced eventually with features from ReiserFS 4? (can it be done without a complete rewrite?).
"Possibly some. Samba has been asking for streams support for a while, and if reiser4 leads the way in an implementation that does not break unix file semantics, jfs (and possibly other file systems) may follow."
-----
Dunno if IBM did much to JFS after that... haven't been following their work wrt JFS...
FC
On 08/04/12 8:26 PM, Fernando Cassia wrote:
Dunno if IBM did much to JFS after that... haven't been following their work wrt JFS...
JFS is the primary file system for AIX on their big Power servers, and on those it performs very, very well. The utilities are fully integrated, so growing a file system is a one-step process that takes care of both the LVM and JFS online in a single command:
# chfs -size=+10G /home
hard to be much simpler than that!
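A rough Linux-side equivalent of that one-liner, assuming /home lives on an LVM logical volume /dev/vg0/home formatted with ext4 (names are placeholders):

lvextend -L +10G /dev/vg0/home
resize2fs /dev/vg0/home          # online grow of the mounted ext4

Newer lvm2 releases collapse this into a single step with "lvextend -r -L +10G /dev/vg0/home".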
John R Pierce pierce@hogranch.com wrote:
The utilities are fully integrated, so growing a file system is a one-step process that takes care of both the LVM and JFS online in a single command:
# chfs -size=+10G /home
hard to be much simpler than that!
ZFS is simpler than that ;-)
If you enable the zpool autoexpand feature, you just need to start replacing disks with bigger ones. Once you are done replacing all the disks in a RAID set, the filesystem shows the new size.
BTW: where do you expect the additional 10G to come from in your example?
Jörg
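A small sketch of that autoexpand workflow, with pool and device names purely as placeholders:

zpool set autoexpand=on tank
zpool replace tank c0t1d0 c0t5d0     # repeat for every disk in the vdev, waiting for each resilver
zpool list tank                      # capacity grows once the last member has been replaced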
On 08/05/12 3:40 AM, Joerg Schilling wrote:
ZFS is simpler than that ;-)
well aware, I run ZFS on Solaris.
BTW: where do you expect the additional 10G to come from in your example?
from the LVM pool containing /home ... Linux LVM also came from IBM, and was based on the LVM of AIX
On 08/05/2012 07:14 PM, John R Pierce wrote:
from the LVM pool containing /home ... Linux LVM also came from IBM, and was based on the LVM of AIX
AIX had a Logical Volume Manager, sure, but I don't think that's where the Linux LVM came from; the Sistina guys had a fairly independent implementation. And the Linux LVM looks a lot more like the HP variant than the IBM one.
- KB
On Sun, Aug 5, 2012 at 3:33 PM, Karanbir Singh mail-lists@karan.org wrote:
... the Linux LVM looks a lot more like the HP variant than the IBM one.
And all LVM implementations become obsolete with Btrfs ;-P
http://www.youtube.com/watch?v=hxWuaozpe2I
FC
On 08/05/2012 07:40 PM, Fernando Cassia wrote:
And all LVM implementations become obsolete with Btrfs ;-P
You seem confused about what a filesystem and volume management are.
On Sun, Aug 5, 2012 at 3:53 PM, Karanbir Singh mail-lists@karan.org wrote:
You seem confused about what a filesystem and volume management are.
http://www.funtoo.org/wiki/BTRFS_Fun
---- Btrfs, often compared to ZFS, is offering some interesting features like: (snip) Built-in storage pool capabilities (no need for LVM) ----
FC
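A minimal sketch of that built-in pooling, assuming a couple of spare disks (device names are placeholders):

mkfs.btrfs -d raid1 -m raid1 /dev/sdb /dev/sdc   # data and metadata mirrored across both disks
mount /dev/sdb /mnt/pool
btrfs device add /dev/sdd /mnt/pool              # grow the pool with another disk, no LVM involved
btrfs filesystem balance /mnt/pool               # spread existing data over the new layout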
XFS: Recent and Future Adventures in Filesystem Scalability - Dave Chinner Uploaded by linuxconfau2012 on Jan 19, 2012 http://www.youtube.com/watch?NR=1&feature=endscreen&v=FegjLbCnoBw
---~~.~~--- Mike // SilverTip257 //
On Sun, Aug 5, 2012 at 9:25 PM, SilverTip257 silvertip257@gmail.com wrote:
Recent and Future Adventures in Filesystem Scalability - Dave Chinner
Thanks for that vid!
FC
On 08/05/12 11:33 AM, Karanbir Singh wrote:
And the Linux LVM looks a lot more like the HP variant than the IBM one.
Ah, you're right; Googling and Wikipedia say the Linux implementation was based on HP-UX. I was mistaken in thinking IBM had provided their LVM code.
On Sun, Aug 5, 2012 at 12:32 AM, John R Pierce pierce@hogranch.com wrote:
JFS is the primary file system for AIX on their big Power servers, and on those it performs very, very well.
Yes, however my data loss experience was with IBM's OS/2 port of JFS. Probably related to one of these: http://www.os2voice.org/warpcast/1999-08/37CC5F9D.htm
Needless to say, I learned the hard way that filesystems can be buggy. ;) FC
On 08/05/2012 06:46 PM, Fernando Cassia wrote:
Yes, however my data loss experience was with IBM's OS/2 port of JFS. Probably related to one of these: http://www.os2voice.org/warpcast/1999-08/37CC5F9D.htm
I think it's safe to assume that OS/2 experience from 1998 is pretty much irrelevant to the conversation here and to JFS on Linux. Just looking at the kernel tree, it's easy to spot plenty of changes in that part of the codebase.
There is JFS support available for CentOS; feel free to install it and quantify any issues around it.
On Sun, Aug 5, 2012 at 3:38 PM, Karanbir Singh mail-lists@karan.org wrote:
I think it's safe to assume that OS/2 experience from 1998 is pretty much irrelevant to the conversation here and to JFS on Linux.
My data loss was in 2002. :-p
You are putting words in my mouth. Re-read what I posted before you jump to conclusions. I also do not like your patronising tone.
FC
Fernando Cassia fcassia@gmail.com wrote:
"Possibly some. Samba has been asking for streams support for a while, and if reiser4 leads the way in an implementation that does not break unix file semantics, jfs (and possibly other file systems) may follow."
Microsoft tried to advertise their "stream" concept to POSIX in the summer of 2001.
They failed because they used a user interface that is in conflict with POSIX rules (e.g. by forbidding ':' to be a normal character in filenames, or by trying to introduce a new special directory "...").
In August 2001, Sun came up with the extended attribute file concept, which is a superset of the Microsoft stream concept and in addition compatible with POSIX.
In August 2001, the implementation was only usable on UFS outside Sun, but it was implemented in ZFS from the beginning.
Given that the extended attribute file concept is part of the NFSv4 standard, Linux should implement it if it offers full-blown NFSv4. I am not sure whether this applies to Linux, as my latest information says that support for NFSv4 ACLs is also missing on Linux. Note that NFSv4 ACLs are bitwise identical to NTFS ACLs and to ZFS ACLs.
But if you would like to offer SMB exports, you are better off using Solaris, as Solaris comes with an in-kernel SMB server that supports all SMB features. This includes atomic create-with-ACL support, which can only be supported with an enhanced VFS interface.
Jörg
On Sat, Aug 4, 2012 at 4:32 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
What is the age of BTRFS?
BTRFS presentation, mid-2007 https://oss.oracle.com/projects/btrfs/dist/documentation/btrfs-ukuug.pdf
That makes it about 5 years in development. Next...
FC
Fernando Cassia fcassia@gmail.com wrote:
That makes it about 5 years in development. Next...
So BTRFS is 6 years younger than ZFS.
Comparing today's bug reports for BTRFS with ZFS bug reports from 2006 leads me to assume that you should not consider using BTRFS in real production during the next 2-3 years.
Jörg
On Sat, Aug 4, 2012 at 4:32 PM, Joerg Schilling Joerg.Schilling@fokus.fraunhofer.de wrote:
So be careful with BTRFS until it was in wide use for at least 4 years.
FUD alert...
https://events.linuxfoundation.org/events/linuxcon-japan/bo
--- LinuxCon Japan 2012 | Presentations: "On The Way to a Healthy Btrfs Towards Enterprise"
Btrfs has been on full development for about 5 years and it does make lots of progress on both feature and performance, but why does everybody keep tagging it with "experimental"? And why do people still think of it as a vulnerable one for production use? As a goal of production use, we have been strengthening several features, making improvements on performance and keeping fixing bugs to make btrfs stable, for instance, "snapshot aware defrag", "extent buffer cache", "rbtree lock contention", etc. This talk will cover the above and will also show problems we are facing with, solutions we are seeking for and a blueprint we are planning to lay out. For this session, I'll focus on its features and performance, so for the target audience, it'd be better to have a basic knowledge base of filesystem.
Liu Bo, Fujitsu
Liu Bo has been working on linux kernel development since late 2010 as a Fujitsu engineer. He has been working on filesystem field and he's now focusing on btrfs development. ----
FC
On 08/04/2012 08:32 PM, Joerg Schilling wrote:
I would not call it a rant but a food for thought.
agreed!
ZFS was distributed to the public after it turned 4. ZFS is now in public use since more than 7 years.
But ZFS has not had a stable release on Linux yet, making its age there still negative in years. That codebase is likely to take a lot longer to get to a stable status than btrfs.
What is the age of BTRFS?
My personal experience with btrfs is from the 2.8.2x tree, at which point btrfs struggled to notify on a disk-full situation, making it quite academic what the maturity state of the system was! Admittedly, things have moved on, and at this time btrfs has an exponentially higher contributor and adopter base on Linux than ZFS does.
BTW, I don't totally agree that btrfs and ZFS are feature-identical, so there will always be scope for one over the other in terms of how it fits the problem domain a user might have.
Also the OP did not ask for CentOS, but for a filesystem comparison.
This is the CentOS Linux list :) I think it's safe to assume answers here should be directed towards that.
So be careful with BTRFS until it was in wide use for at least 4 years.
So, I'm all for mature and tested systems - always. But given the problem domain, I think it's wrong to generalise to that level. Lots of people will use technology on the cutting edge, and lots more people will adopt for feature matching with app domains. I, for one, am grateful to these people for picking up the stuff in its early days and working to find and then even fix issues as they come up.
ZFS is the best I know of for filesystems >= 2 TB and for cases where you need flexible snapshots.
The other problem with ZFS is its large RAM requirements, and its extremely poor 32-bit support.
There are few fs use cases where COW is not the best.
agreed.
Karanbir Singh mail-lists@karan.org wrote:
But ZFS has not had a stable release on Linux yet, making its age there still negative in years. That codebase is likely to take a lot longer to get to a stable status than btrfs.
The ZFS code base is stable; the problem is the VFS interface in Linux, and that applies to all filesystems...
So be careful with BTRFS until it was in wide use for at least 4 years.
... Lots of people will use technology on the cutting edge, and lots more people will adopt for feature matching with app domains. I, for one, am grateful to these people for picking up the stuff in its early days and working to find and then even fix issues as they come up.
The ZFS developers were forced by Sun to put their home directories on ZFS in 2001. Does this apply to BTRFS too?
the other problem with zfs is its large RAM requirements - and extremely poor 32bit support.
This is not a problem, as nobody encourages you to put ZFS into toys or phones. With the typical amount of RAM in today's machines, ZFS is happy.
Jörg
On Mon, Aug 06, 2012 at 12:00:22PM +0200, Joerg Schilling wrote:
The ZFS code base is stable; the problem is the VFS interface in Linux, and that applies to all filesystems...
Hello,
Care to explain what the problem is in the Linux VFS layer?
-- Pasi
Pasi Kärkkäinen pasik@iki.fi wrote:
Hello,
Care to explain what the problem is in the Linux VFS layer?
The VFS layer was introduced in 1980 by Bill Joy when he started the UFS project (his master thesis).
The VFS layer was enhanced around 1984/1985 to support NFS.
The VFS layer was enhanced again in 1987 to support mmap() and the new memory model from SunOS-4.0 that has been copied by all major OSes.
In 1993, VFS was enhanced to support ACLs.
In 2007, VFS was enhanced to support CIFS (e.g. atomic create with ACL, switch into case insensitive mode, ...).
VFS has a 30-year history and offers a clean abstract design. It is even independent of typical changes in the proc or user structures. It uses vnodes instead of inodes to provide a clean abstraction level.
The Linux VFS does not look very abstract; it is directly bound to user/proc structures and to Linux I/O. There is no support for NFS (get a fid from a vnode and a vnode from a fid). There is no support for ACLs (letting the filesystem decide whether access should be granted).
The deviating I/O model in Linux will cause a lot of work for adaptation...
What about extended attribute files (the superset of Win-DOS streams)?
Jörg
On 08/04/12 7:01 AM, ashkab rahmani wrote:
Hello, I have 16 TB of storage: 8x 2 TB SATA drives in a RAID. I want to share it on the network via NFS. Which file system is best for it?
We are using XFS with CentOS 6.latest on 80 TB file systems; it works quite well and handles a mix of many tiny files and very large files without any special tuning.
There's one big issue with NFS that requires a workaround... XFS requires 64-bit inodes on a large file system ('inode64'), and by default NFS wants to use the inode as the unique ID for the export. This doesn't work, as that unique ID has to be 32 bits, so you have to manually specify a unique identifier for each share from a given server. I can't remember offhand what the specific option is, but you can specify 1, 2, 3, 4 for the share identifiers, or any other unique integer. If you only export the root of a file system, this is not a problem. This problem is squarely an NFS implementation problem; that code should have been fixed eons ago.
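A sketch of how that workaround usually looks, with device, paths and client range as placeholders (the export option being referred to is fsid):

mount -o inode64 /dev/md0 /export/data

# /etc/exports -- give each exported sub-directory its own small fsid
/export/data         192.168.1.0/24(rw,fsid=1,no_subtree_check)
/export/data/builds  192.168.1.0/24(rw,fsid=2,no_subtree_check)

exportfs -ra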
One disadvantage I've seen with XFS is that you cannot shrink [0] the file system. For a box dedicated to network storage this shouldn't be a problem. But in my instance I made /var a bit too large and needed to reclaim space for /.
[0] http://xfs.org/index.php/Shrinking_Support
---~~.~~--- Mike // SilverTip257 //
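For completeness, growing XFS online is a one-liner, while reclaiming space means dump, recreate smaller, and restore; a rough sketch assuming /var sits on an LVM logical volume (names are placeholders):

lvextend -L +50G /dev/vg0/var
xfs_growfs /var                          # grow while mounted

xfsdump -l 0 -f /backup/var.dump /var    # there is no shrink, so back up...
# ...recreate the LV and filesystem at the smaller size, then:
xfsrestore -f /backup/var.dump /var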
John R Pierce pierce@hogranch.com wrote:
There's one big issue with NFS that requires a workaround... XFS requires 64-bit inodes on a large file system ('inode64'), and by default NFS wants to use the inode as the unique ID for the export. This doesn't work, as that unique ID has to be 32 bits, so you have to manually specify a unique identifier for each share from a given server.
This is wrong.
Your claim is approximately correct for NFSv2 (1988) but wrong for other NFS versions.
On NFSv2, the file handle cannot carry more than 32-bit inode numbers.
NFSv3 changed this (I believe this is from 1990).
Unfortunately, NFSv2 and NFSv3 have been implemented in the same server/client code, and thus it was not recommended to use large NFS file handles, in order to retain NFSv2 compatibility.
NFSv4 (since 2004) by default uses large NFS filehandles.
Your problem may be caused by the quality of the NFS code in Linux, so it is worth making a bug report.
Jörg
On 08/05/12 3:06 AM, Joerg Schilling wrote:
Your claim is approximately correct for NFSv2 (1988) but wrong for other NFS versions.
The server was using NFS v3/v4 on CentOS 6.2 earlier this year, with various clients, including Solaris 10. The problems were reported from our overseas manufacturing operations, so I only got them third-hand and don't know all the specifics. In my lab I had only shared the root of the file system, as that's the model I use, but apparently operations likes to have lots of different shares, MS Windows style. This was a 'stop production' kind of error, so the most expedient fix was to manually specify the export ID.
John R Pierce pierce@hogranch.com wrote:
This was a 'stop production' kind of error, so the most expedient fix was to manually specify the export ID.
If you suffer from bugs in Linux filesystem implementations, you should make a bug report against the related code. Only a bug report and a willing maintainer can help you.
The problem you describe does not exist on Solaris or on other systems with bug-free NFS, and I know why I try to avoid Linux when NFS is important. It is a pity that after many years there are still NFS problems in Linux.
Again:
- NFSv2 (from 1988) allows 32 Bytes for a NFS file handle
- NFSv3 (from 1990) allows 64 Bytes for a NFS file handle
- NFSv4 (from 2004) has no hard limit here
With the 32 byte file handle, there are still 12 bytes (including a 2 byte length indicator) for the file id in the file handle.
If your filesystem could use 44 and more bytes in the case you describe, there is no problem - except when the code is not OK.
It is of course nice to still support SunOS-4.0 clients, but in case the client supports NFSv3 or newer, why not use longer file IDs?
Jörg
On 08/05/12 3:18 PM, Joerg Schilling wrote:
NFSv2 (from 1988) allows 32 Bytes for a NFS file handle
NFSv3 (from 1990) allows 64 Bytes for a NFS file handle
NFSv4 (from 2004) has no hard limit here
We had both Solaris 10 (aka SunOS 5.10) clients and EL5/6 clients. The error is "Stale NFS file handle".
Anyway, this refers to the fsid problem: http://xfs.org/index.php/XFS_FAQ#Q:_Why_doesn.27t_NFS-exporting_subdirectori...
I discussed this problem on this list back in March and got little useful feedback.
I see related issues here: http://oss.sgi.com/archives/xfs/2009-11/msg00161.html so this problem has been known for a while.
We were unable to make the 'fsid=uuid' option work (or we didn't understand it), but using fsid=## with unique integers for each export works fine, so that's what we went with.
Are these fsids the same as your 32- vs 64-bit file handles? It doesn't sound like it to me, unless I'm misunderstanding what you're referring to as a file handle.
John R Pierce pierce@hogranch.com wrote:
We had both Solaris 10 (aka SunOS 5.10) clients and EL5/6 clients. The error is "Stale NFS file handle".
So the bug is in the NFS server code.
An NFS file handle is made of a filesystem ID, a file ID for the export directory, and a file ID for the current file.
A file ID is (for a writable filesystem) typically made of the inode number and a file generation number that is incremented every time a destructive operation on the file occurs.
"Stale NFS file handle" is the error code returned if the referenced file (inode number + file generation number) no longer exists.
I assume that in your case the server sent out invalid file IDs that do not point to a valid file. For this reason, the client gets a "Stale NFS file handle" when it uses the NFS file handle returned by the NFS server for an open() operation.
anyways, this refers to the fsid problem, http://xfs.org/index.php/XFS_FAQ#Q:_Why_doesn.27t_NFS-exporting_subdirectori...
This seems to be an "explanation" from the people who wrote the non-working NFS code.
we were unable to make the 'fsid=uuid' option work (or we didn't understand it), but using fsid=## for unique integers for each export works fine, so thats what we went with.
are these fsid's the same as your 32 vs 64 bit file handles ? doesn't sound like it to me, unless I'm misunderstanding what you're referring to as a file handle.
If you introduce new terms, you need to explain them. I can only explain how NFS works, and that it is possible to use NFS to export filesystems with more than 32-bit inode numbers.
Just a note: even with NFSv2, you could have an 8-byte inode number + 2-byte file generation number, a 7-byte inode number + 3-byte file generation number, or a 6-byte inode number + 4-byte file generation number.
It is most unlikely that a filesystem designed for NFSv2 export will use the full 8-byte address space that 64-bit inode numbers could allow.
Jörg
On 08/04/2012 07:01 AM, ashkab rahmani wrote:
I want to share it on the network via NFS. Which file system is better for it?
I have a hard time imagining that you'd get useful information from cross-posting this to the FreeBSD and CentOS lists. Their implementations of filesystems are completely different.
If you use a CentOS NFS server, I'd recommend ext4. In benchmarks I often see XFS perform better than ext4 in specific tests; JFS rarely does well. However, the last time I used XFS was for a system running ZoneMinder. Benchmarks led us to believe that XFS would be a better filesystem, but in our actual implementation it couldn't keep up. The application couldn't delete data quickly enough when the volume was nearly full, so the volume would fill up and the system would fail. We only got it working reliably after switching to ext4.
Gordon Messmer yinyang@eburg.com wrote:
If you use a CentOS NFS server, I'd recommend ext4. In benchmarks I often see XFS perform better than ext4 in specific tests.
Whatever you do, don't use benchmarks to compare Linux with other OSes; Linux cheats.
A benchmark tries to do something to completion and then measure the time for that, but I've seen Linux try its best to prevent this completed set of actions from happening within a known time.
If I unpack the Linux kernel sources on Solaris and UFS using star (which by default calls fsync() for each file), everything is stable on disk when star finishes.
If you do the same on Linux/ext3 with GNU tar, gtar finishes 10-30% faster than star does on Solaris, but on Linux almost nothing is on disk at that time.
If you'd like to compare, I recommend using gtar or star -no-fsync to unpack the Linux kernel sources and pulling the mains plug after the tar extract seems to be done (the next prompt is displayed). Then reboot and compare what you have on disk and whether the filesystem on disk is still usable.
Jörg
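Short of pulling the plug, one way to make the comparison fairer is to time the extract together with a flush to disk; a rough sketch along those lines (paths are placeholders, and star's -no-fsync option is the one mentioned above):

time -p /opt/schily/bin/star -x -no-fsync file=../linux-3.5.1.tar          # buffered only
time -p sh -c 'tar -xf ../linux-3.5.1.tar && sync'                         # includes writing the data out

The second number is closer to what star-with-fsync is actually measuring.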
Whatever you do, don't use benchmarks to compare Linux with other OSes; Linux cheats.
A benchmark tries to do something to completion and then measure the time for that, but I've seen Linux try its best to prevent this completed set of actions from happening within a known time.
If I unpack the Linux kernel sources on Solaris and UFS using star (which by default calls fsync() for each file), everything is stable on disk when star finishes.
A little data never hurts. Even if the numbers mean little.
test 1 - Debian Linux 6.0.5 on x86_64
root@tfs01:/usr/local# uname -a Linux tfs01 2.6.32-5-amd64 #1 SMP Sun May 6 04:00:17 UTC 2012 x86_64 GNU/Linux root@tfs01:/usr/local# cat /etc/debian_version 6.0.5
root@tfs01:/usr/local# cd src root@tfs01:/usr/local/src# ls -a . .. root@tfs01:/usr/local/src# root@tfs01:/usr/local/src# wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.5.1.tar.bz2 --2012-08-12 20:39:18-- http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.5.1.tar.bz2 Resolving www.kernel.org... 149.20.4.69, 149.20.20.133 Connecting to www.kernel.org|149.20.4.69|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 80981090 (77M) [application/x-bzip2] Saving to: `linux-3.5.1.tar.bz2'
100%[======================================>] 80,981,090 871K/s in 92s
2012-08-12 20:40:50 (861 KB/s) - `linux-3.5.1.tar.bz2' saved [80981090/80981090]
root@tfs01:/usr/local/src#
root@tfs01:/usr/local/src# time -p openssl dgst -sha256 linux-3.5.1.tar.bz2 SHA256(linux-3.5.1.tar.bz2)= 78f8553c7cbc09d9c12d45c7f6ec82f964bf35d324832bede273ba2f6fe3e3ed real 0.85 user 0.78 sys 0.02
CPU type :
root@tfs01:/usr/local/src# grep -E "^processor|^vendor_id|^model\ name" /proc/cpuinfo processor : 0 vendor_id : AuthenticAMD model name : AMD Opteron(TM) Processor 6272 processor : 1 vendor_id : AuthenticAMD model name : AMD Opteron(TM) Processor 6272 processor : 2 vendor_id : AuthenticAMD model name : AMD Opteron(TM) Processor 6272 processor : 3 vendor_id : AuthenticAMD model name : AMD Opteron(TM) Processor 6272
root@tfs01:/usr/local/src# mkdir foo root@tfs01:/usr/local/src# cd foo root@tfs01:/usr/local/src/foo# /opt/schily/bin/star -x -bz -xdir -xdot -U -fs=64m -fifostats -time file=../linux-3.5.1.tar.bz2 ; find . -type f | wc -l star: fifo had 46849 puts 80626 gets. star: fifo was 1 times empty and 12 times full. star: fifo held 67112960 bytes max, size was 67112960 bytes star: fifo had 5 moves, total of 25600 moved bytes star: fifo is 0% full (2k), size 65540k. star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k). star: Total time 73.528sec (6371 kBytes/sec) 39096
root@tfs01:/usr/local/src/foo# cd .. root@tfs01:/usr/local/src# time -p rm -rf foo real 0.97 user 0.01 sys 0.94 root@tfs01:/usr/local/src# mkdir doo root@tfs01:/usr/local/src# cd doo root@tfs01:/usr/local/src/doo# which tar /bin/tar
root@tfs01:/usr/local/src/doo# time -p tar --use-compress-program /bin/bzip2 -xf ../linux-3.5.1.tar.bz2 ; find . -type f | wc -l real 25.14 user 24.03 sys 4.48 39096
Let's try that without compression involved.
root@tfs01:/usr/local/src# bunzip2 linux-3.5.1.tar.bz2 root@tfs01:/usr/local/src# root@tfs01:/usr/local/src# ls -lApb total 468956 -rw-r--r-- 1 root staff 479733760 Aug 9 15:44 linux-3.5.1.tar root@tfs01:/usr/local/src# mkdir foo root@tfs01:/usr/local/src# cd foo root@tfs01:/usr/local/src/foo# time -p /opt/schily/bin/star -x -xdir -xdot -U -fs=64m -fifostats -time file=../linux-3.5.1.tar star: fifo had 46849 puts 80626 gets. star: fifo was 1 times empty and 19 times full. star: fifo held 67112960 bytes max, size was 67112960 bytes star: fifo had 5 moves, total of 25600 moved bytes star: fifo is 0% full (2k), size 65540k. star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k). star: Total time 73.252sec (6395 kBytes/sec) real 73.33 user 0.53 sys 5.77
root@tfs01:/usr/local/src/foo# cd .. root@tfs01:/usr/local/src# rm -rf foo root@tfs01:/usr/local/src# mkdir doo root@tfs01:/usr/local/src# cd doo root@tfs01:/usr/local/src/doo# time -p tar -xf ../linux-3.5.1.tar real 3.13 user 0.22 sys 2.80
3 seconds? I find that a tad hard to believe.
test 2 - Solaris 10 on HP Server x86_64 with ZFS
$ uname -a SunOS foxtrot 5.10 Generic_147441-19 i86pc i386 i86pc $ cat /etc/release Oracle Solaris 10 8/11 s10x_u10wos_17b X86 Copyright (c) 1983, 2011, Oracle and/or its affiliates. All rights reserved. Assembled 23 August 2011
$ zpool list NAME SIZE ALLOC FREE CAP HEALTH ALTROOT foxtrot_rpool 1.62T 66.1G 1.56T 3% ONLINE -
$ zpool get all foxtrot_rpool NAME PROPERTY VALUE SOURCE foxtrot_rpool size 1.62T - foxtrot_rpool capacity 3% - foxtrot_rpool altroot - default foxtrot_rpool health ONLINE - foxtrot_rpool guid 14156136675261437976 default foxtrot_rpool version 29 default foxtrot_rpool bootfs foxtrot_rpool/ROOT/s10x_u10wos_17b local foxtrot_rpool delegation on default foxtrot_rpool autoreplace off default foxtrot_rpool cachefile - default foxtrot_rpool failmode continue local foxtrot_rpool listsnapshots on default foxtrot_rpool autoexpand off default foxtrot_rpool free 1.56T - foxtrot_rpool allocated 66.1G - foxtrot_rpool readonly off -
$ psrinfo -pv The physical processor has 12 virtual processors (0 2 4 6 8 10 12 14 16 18 20 22) x86 (chipid 0x0 GenuineIntel family 6 model 44 step 2 clock 3467 MHz) Intel(r) Xeon(r) CPU X5690 @ 3.47GHz The physical processor has 12 virtual processors (1 3 5 7 9 11 13 15 17 19 21 23) x86 (chipid 0x1 GenuineIntel family 6 model 44 step 2 clock 3467 MHz) Intel(r) Xeon(r) CPU X5690 @ 3.47GHz
$ /usr/sfw/bin/wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.5.1.tar.bz2 --2012-08-12 17:09:38-- http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.5.1.tar.bz2 Resolving www.kernel.org... 149.20.20.133, 149.20.4.69 Connecting to www.kernel.org|149.20.20.133|:80... connected. HTTP request sent, awaiting response... 200 OK Length: 80981090 (77M) [application/x-bzip2] Saving to: `linux-3.5.1.tar.bz2'
100%[======================================>] 80,981,090 605K/s in 2m 13s
2012-08-12 17:11:51 (596 KB/s) - `linux-3.5.1.tar.bz2' saved [80981090/80981090]
$ ptime digest -a sha256 linux-3.5.1.tar.bz2 78f8553c7cbc09d9c12d45c7f6ec82f964bf35d324832bede273ba2f6fe3e3ed
real 0.552 user 0.496 sys 0.039
Strangely the openssl in Solaris 10 is SHA256 ignorant.
However, one may use the mdigest in schilytools also:
$ ptime /opt/schily/bin/mdigest -a sha256 linux-3.5.1.tar.bz2 78f8553c7cbc09d9c12d45c7f6ec82f964bf35d324832bede273ba2f6fe3e3ed linux-3.5.1.tar.bz2
real 0.576 user 0.562 sys 0.014
Regardless .. let's do a test here :
$ mkdir foo $ cd foo $ ptime /opt/schily/bin/star -x -bz -xdir -xdot -U -fs=64m -fifostats -time file=../linux-3.5.1.tar.bz2 star: fifo had 46849 puts 80641 gets. star: fifo was 14 times empty and 0 times full. star: fifo held 22493696 bytes max, size was 67112960 bytes star: fifo had 5 moves, total of 25600 moved bytes star: fifo is 0% full (2k), size 65540k. star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k). star: Total time 16.234sec (28858 kBytes/sec)
real 16.238 user 15.890 sys 3.738
There is also a GNU tar somewhere in Solaris 10 :
$ /usr/sfw/bin/gtar --version tar (GNU tar) 1.26 Copyright (C) 2011 Free Software Foundation, Inc. License GPLv3+: GNU GPL version 3 or later http://gnu.org/licenses/gpl.html. This is free software: you are free to change and redistribute it. There is NO WARRANTY, to the extent permitted by law.
Written by John Gilmore and Jay Fenlason. $ which bzip2 /usr/bin/bzip2
$ cd .. $ rm -rf foo $ mkdir doo $ cd doo
$ ptime /usr/sfw/bin/gtar --use-compress-program /usr/bin/bzip2 -xf ../linux-3.5.1.tar.bz2
real 16.597 user 16.484 sys 2.500 $
Also if we avoid the compression :
$ cd .. $ rm -rf doo $ cd .. $ rm -rf doo $ $ ls linux-3.5.1.tar.bz2 $ bunzip2 linux-3.5.1.tar.bz2 $ mkdir foo $ cd foo $ ptime /opt/schily/bin/star -x -xdir -xdot -U -fs=64m -fifostats -time file=../linux-3.5.1.tar star: fifo had 46849 puts 80626 gets. star: fifo was 1 times empty and 18 times full. star: fifo held 67112448 bytes max, size was 67112960 bytes star: fifo had 5 moves, total of 25600 moved bytes star: fifo is 0% full (2k), size 65540k. star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k). star: Total time 7.099sec (65993 kBytes/sec)
real 7.103 user 0.314 sys 3.376
$ cd ..
$ rm -rf foo
$ mkdir doo
$ cd doo
$ ptime /usr/sfw/bin/gtar -xf ../linux-3.5.1.tar
real 2.579 user 0.280 sys 2.165
$ cd ..
$ rm -rf doo
TEST 3 - Solaris 10 on UltraSparc III and UFS filesystem
jupiter-sparc-SunOS5.10 # uname -a
SunOS jupiter 5.10 Generic_147440-20 sun4u sparc SUNW,Sun-Fire-480R
jupiter-sparc-SunOS5.10 # cat /etc/release
                      Solaris 10 5/08 s10s_u5wos_10 SPARC
          Copyright 2008 Sun Microsystems, Inc.  All Rights Reserved.
                       Use is subject to license terms.
                            Assembled 24 March 2008
jupiter-sparc-SunOS5.10 # psrinfo -pv
The physical processor has 1 virtual processor (0)
  UltraSPARC-III+ (portid 0 impl 0x15 ver 0x23 clock 900 MHz)
The physical processor has 1 virtual processor (1)
  UltraSPARC-III+ (portid 1 impl 0x15 ver 0x23 clock 900 MHz)
The physical processor has 1 virtual processor (2)
  UltraSPARC-III+ (portid 2 impl 0x15 ver 0x23 clock 900 MHz)
The physical processor has 1 virtual processor (3)
  UltraSPARC-III+ (portid 3 impl 0x15 ver 0x23 clock 900 MHz)
jupiter-sparc-SunOS5.10 # df -F ufs -h
Filesystem             Size   Used  Available Capacity  Mounted on
/dev/md/dsk/d0          16G   9.4G       6.2G    61%    /
/dev/md/dsk/d3         7.8G   4.9G       2.6G    66%    /var
jupiter-sparc-SunOS5.10 # pwd
/usr/local/src
jupiter-sparc-SunOS5.10 # ls -a
.   ..
jupiter-sparc-SunOS5.10 # /usr/sfw/bin/wget http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.5.1.tar.bz2
--2012-08-12 22:03:01--  http://www.kernel.org/pub/linux/kernel/v3.0/linux-3.5.1.tar.bz2
Resolving www.kernel.org... 149.20.20.133, 149.20.4.69
Connecting to www.kernel.org|149.20.20.133|:80... connected.
HTTP request sent, awaiting response... 200 OK
Length: 80981090 (77M) [application/x-bzip2]
Saving to: `linux-3.5.1.tar.bz2'
     0K ........ ........ ........ ........ ........ ........   3%  461K 2m45s
  3072K ........ ........ ........ ........ ........ ........   7%  476K 2m36s
 73728K ........ ........ ........ ........ ........ ........  97%  510K 5s
 76800K ........ ........ ........ ........ ...               100%  512K=2m36s
2012-08-12 22:05:38 (505 KB/s) - `linux-3.5.1.tar.bz2' saved [80981090/80981090]
jupiter-sparc-SunOS5.10 # pwd
/usr/local/src
jupiter-sparc-SunOS5.10 # ptime digest -a sha256 linux-3.5.1.tar.bz2
78f8553c7cbc09d9c12d45c7f6ec82f964bf35d324832bede273ba2f6fe3e3ed
real 3.699 user 3.286 sys 0.264
jupiter-sparc-SunOS5.10 # ptime /opt/schily/bin/mdigest -a sha256 linux-3.5.1.tar.bz2
78f8553c7cbc09d9c12d45c7f6ec82f964bf35d324832bede273ba2f6fe3e3ed  linux-3.5.1.tar.bz2
real 3.136 user 2.964 sys 0.125
jupiter-sparc-SunOS5.10 # mkdir foo
jupiter-sparc-SunOS5.10 # cd foo
jupiter-sparc-SunOS5.10 # pwd
/usr/local/src/foo
jupiter-sparc-SunOS5.10 # ptime /opt/schily/bin/star -x -bz -xdir -xdot -U -fs=64m -fifostats -time file=../linux-3.5.1.tar.bz2
star: fifo had 46849 puts 80626 gets.
star: fifo was 1 times empty and 17 times full.
star: fifo held 67109888 bytes max, size was 67112960 bytes
star: fifo had 5 moves, total of 25600 moved bytes
star: fifo is 0% full (2k), size 65540k.
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
star: Total time 955.454sec (490 kBytes/sec)
real 15:55.612 user 1:01.947 sys 31.922
jupiter-sparc-SunOS5.10 # cd ..
jupiter-sparc-SunOS5.10 # ptime rm -rf foo
real 2:14.021 user 0.501 sys 7.249
jupiter-sparc-SunOS5.10 # mkdir doo
jupiter-sparc-SunOS5.10 # cd doo
jupiter-sparc-SunOS5.10 # ptime /usr/sfw/bin/gtar --use-compress-program /usr/bin/bzip2 -xf ../linux-3.5.1.tar.bz2
real 4:40.456 user 1:04.095 sys 19.713
jupiter-sparc-SunOS5.10 # cd ..
jupiter-sparc-SunOS5.10 # ptime rm -rf doo
real 2:14.808 user 0.502 sys 7.235
jupiter-sparc-SunOS5.10 # ptime bunzip2 linux-3.5.1.tar.bz2
real 1:05.220 user 1:00.120 sys 3.073
jupiter-sparc-SunOS5.10 # mkdir foo
jupiter-sparc-SunOS5.10 # cd foo
jupiter-sparc-SunOS5.10 # ptime /opt/schily/bin/star -x -xdir -xdot -U -fs=64m -fifostats -time file=../linux-3.5.1.tar
star: fifo had 46849 puts 80626 gets.
star: fifo was 1 times empty and 19 times full.
star: fifo held 67110912 bytes max, size was 67112960 bytes
star: fifo had 5 moves, total of 25600 moved bytes
star: fifo is 0% full (2k), size 65540k.
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
star: Total time 930.722sec (503 kBytes/sec)
real 15:31.002 user 1.802 sys 29.512
jupiter-sparc-SunOS5.10 # ptime rm -rf foo
real 2:15.059 user 0.502 sys 7.177
jupiter-sparc-SunOS5.10 # mkdir doo
jupiter-sparc-SunOS5.10 # cd doo
jupiter-sparc-SunOS5.10 # ptime /usr/sfw/bin/gtar -xf ../linux-3.5.1.tar
real 3:57.001 user 2.065 sys 18.466
jupiter-sparc-SunOS5.10 #
So there we have some numbers.
Dennis
Dennis Clarke dclarke@blastwave.org wrote:
A little data never hurts. Even if the numbers mean little.
test 1 - Debian Linux 6.0.5 on x86_64
Given the fact that you did not run star with -no-fifo, you are comparing an insecure implementation (gtar never calls fsync(2)) with an implementation that is secure by default (star).
Also note: ZFS has one real problem: it is really slow if you force it to grant a stable state on the medium. This is why ZFS is approx. 4x slower with star (without -no-fsync) than with gtar in your test.
ext3 is slow with star (without -no-fsync) because ext3 is not optimized. ext3 is fast with gtar because it cheats.
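A simple way to verify this fsync(2) difference for yourself is to count the calls with truss; a rough sketch reusing the uncompressed archive from the tests above (running under truss adds its own overhead, so ignore the timings of such a run):

truss -f -c -t fsync /opt/schily/bin/star -x -fs=64m file=/tmp/linux-3.5.1.tar    # star: expect one fsync per extracted file
truss -f -c -t fsync gtar -xf /tmp/linux-3.5.1.tar                                # gtar: per the description above, expect zero

truss -c prints a per-syscall count at exit, -t fsync restricts it to fsync(2), and -f follows the child process star forks for its FIFO.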
Solaris with UFS:
bzip2 -d < /tmp/linux-3.5.1.tar.bz2 > /dev/null
19.909r 19.770u 0.110s 99% 0M 0+0k 0st 0+0io 0pf+0w
OK, 19.77 seconds of user CPU time...
star -xp -xdot -time < /tmp/linux-3.5.1.tar.bz2
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
star: Total time 70.477sec (6647 kBytes/sec)
1:10.489r 23.020u 9.010s 45% 0M 0+0k 0st 0+0io 0pf+0w
star -xp -xdot -time -no-fsync < /tmp/linux-3.5.1.tar.bz2
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
star: Total time 74.161sec (6317 kBytes/sec)
1:14.174r 21.840u 4.640s 35% 0M 0+0k 0st 0+0io 0pf+0w
Only half of the System CPU time here because fsync(2) calls are missing...
gtar --totals -xf /tmp/linux-3.5.1.tar.bz2
Total bytes read: 479733760 (458MiB, 4.5MiB/s)
1:42.658r 23.150u 5.530s 27% 0M 0+0k 0st 0+0io 0pf+0w
gtar does not have a FIFO and thus is slower than star.
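(The FIFO in question is the buffer behind the "fifo had ... puts ... gets" lines in the star runs earlier in the thread; as I understand it, star forks a second process and the two share that buffer. Its size is set with fs= and it can be switched off for comparison; a hedged example, where the 128m size is just an arbitrary alternative, not a recommendation:

star -x -fs=128m -fifostats -time < /tmp/linux-3.5.1.tar    # larger FIFO, print its statistics
star -x -no-fifo -time < /tmp/linux-3.5.1.tar               # single process, no FIFO at all
)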
Now uncompressed (you see that the user CPU time for bzip2 is missing):
star -xp -xdot -time < /tmp/linux-3.5.1.tar
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
star: Total time 70.438sec (6651 kBytes/sec)
1:10.449r 0.520u 8.190s 12% 0M 0+0k 0st 0+0io 0pf+0w
star -xp -xdot -time -no-fsync < /tmp/linux-3.5.1.tar
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
star: Total time 86.624sec (5408 kBytes/sec)
1:26.636r 0.300u 3.960s 4% 0M 0+0k 0st 0+0io 0pf+0w
gtar --totals -xf /tmp/linux-3.5.1.tar
Total bytes read: 479733760 (458MiB, 4.7MiB/s)
1:38.829r 0.440u 4.570s 5% 0M 0+0k 0st 0+0io 0pf+0w
Now ZFS being on a single disk like the UFS test before (let us omit the test with the compressed archive):
star -xp -xdot -time < /tmp/linux-3.5.1.tar
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
star: Total time 394.783sec (1186 kBytes/sec)
6:34.795r 0.580u 8.250s 2% 0M 0+0k 0st 0+0io 0pf+0w
As expected: ZFS is slow if you force it to grant a stable state.
star -xp -xdot -time -no-fsync < /tmp/linux-3.5.1.tar
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
star: Total time 11.085sec (42259 kBytes/sec)
11.096r 0.290u 4.380s 42% 0M 0+0k 0st 0+0io 0pf+0w
gtar --totals -xf /tmp/linux-3.5.1.tar
Total bytes read: 479733760 (458MiB, 39MiB/s)
11.929r 0.360u 4.260s 38% 0M 0+0k 0st 0+0io 0pf+0w
As you see, if you permit star to be as insecure as gtar always is, star is always faster than gtar.
Now the same on a thumper (ZFS on RAIDZ2):
star -xp -xdot -time < /tmp/linux-3.5.1.tar
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
star: Total time 349.575sec (1340 kBytes/sec)
5:49.595r 0.690u 11.270s 3% 0M 0+0k 0st 0+0io 0pf+0w
star -xp -xdot -time -no-fsync < /tmp/linux-3.5.1.tar
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
star: Total time 4.626sec (101251 kBytes/sec)
4.654r 0.330u 4.990s 115% 0M 0+0k 0st 0+0io 0pf+0w
gtar --totals -xf /tmp/linux-3.5.1.tar
Total bytes read: 479733760 (458MiB, 85MiB/s)
5.510r 0.430u 4.130s 82% 0M 0+0k 0st 0+0io 0pf+0w
Jörg
Dennis Clarke dclarke@blastwave.org wrote:
A little data never hurts. Even if the numbers mean little.
test 1 - Debian Linux 6.0.5 on x86_64
Given the fact that you did not run star with -no-fifo, you are comparing an insecure implementation (gtar never calls fsync(2)) with an implementation that is secure by default (star).
Comparison numbers are only valid if the tests run are the same.
So here is the UFS test once more without the compression and with -no-fifo :
jupiter-sparc-SunOS5.10 # ptime /opt/schily/bin/star -x -xdir -xdot -no-fifo -U file=../linux-3.5.1.tar
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
real 27:44.237 user 2.031 sys 42.419
Not a good result.
dc
Dennis Clarke dclarke@blastwave.org wrote:
Given the fact that you did not run star with -no-fifo, you are comparing an insecure implementation (gtar never calls fsync(2)) with an implementation that is secure by default (star).
Comparison numbers are only valid if the tests run are the same.
So here is the UFS test once more without the compression and with -no-fifo :
jupiter-sparc-SunOS5.10 # ptime /opt/schily/bin/star -x -xdir -xdot -no-fifo -U file=../linux-3.5.1.tar
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
real 27:44.237 user 2.031 sys 42.419
Not a good result.
So try to think about the reasons... star is definitely not the reason. The fact that you spend 10x the expected amount of SYS CPU time points to a problem on your system.
Also, the USER CPU time is 8x the expected amount. Did you run this test on _very_ old hardware?
Jörg
Comparison numbers are only valid if the tests run are the same.
So here is the UFS test once more without the compression and with -no-fifo :
jupiter-sparc-SunOS5.10 # ptime /opt/schily/bin/star -x -xdir -xdot
-no-fifo -U file=../linux-3.5.1.tar
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
real 27:44.237 user 2.031 sys 42.419
Not a good result.
So try to think about the reasons... star is definitely not the reason. The fact that you spend 10x the expected amount of SYS CPU time points to a problem on your system.
Also, the USER CPU time is 8x the expected amount. Did you run this test on _very_ old hardware?
Jörg
It would be reasonable to think of a Sun Fire V480 as old hardware, yes, but not *very* old. I do have *very* old hardware if you would like me to test there.
The server runs fine, is patched up to date. The UFS filesystem that was used is actually the root filesystem and it is a metadevice mirror of the two internal disks.
jupiter-sparc-SunOS5.10 # metastat d0
d0: Mirror
    Submirror 0: d10
      State: Okay
    Submirror 1: d20
      State: Okay
    Pass: 1
    Read option: geometric (-g)
    Write option: parallel (default)
    Size: 33560448 blocks (16 GB)

d10: Submirror of d0
    State: Okay
    Size: 33560448 blocks (16 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1t0d0s0          0     No            Okay   Yes

d20: Submirror of d0
    State: Okay
    Size: 33560448 blocks (16 GB)
    Stripe 0:
        Device     Start Block  Dbase        State Reloc Hot Spare
        c1t1d0s0          0     No            Okay   Yes

Device Relocation Information:
Device   Reloc  Device ID
c1t0d0   Yes    id1,ssd@n20000004cfb6f0ff
c1t1d0   Yes    id1,ssd@n20000004cfa10ec9
However I do also have :
jupiter-sparc-SunOS5.10 # luxadm probe
No Network Array enclosures found in /dev/es

Found Fibre Channel device(s):
  Node WWN:20000004cfb6f0ff  Device Type:Disk device
    Logical Path:/dev/rdsk/c1t0d0s2
  Node WWN:20000004cfa10ec9  Device Type:Disk device
    Logical Path:/dev/rdsk/c1t1d0s2
  Node WWN:20000018625de4c6  Device Type:Disk device
    Logical Path:/dev/rdsk/c2t16d0s2
  Node WWN:20000018625d599d  Device Type:Disk device
    Logical Path:/dev/rdsk/c2t17d0s2
  Node WWN:20000018625f11cb  Device Type:Disk device
    Logical Path:/dev/rdsk/c2t18d0s2
  Node WWN:500000e0148b6620  Device Type:Disk device
    Logical Path:/dev/rdsk/c2t19d0s2
  Node WWN:20000014c3a51579  Device Type:Disk device
    Logical Path:/dev/rdsk/c2t20d0s2
  Node WWN:20000018625fe1a2  Device Type:Disk device
    Logical Path:/dev/rdsk/c2t21d0s2
  Node WWN:20000014c3a8ee8b  Device Type:Disk device
    Logical Path:/dev/rdsk/c2t22d0s2
  Node WWN:20000018629c6c36  Device Type:Disk device
    Logical Path:/dev/rdsk/c5t0d0s2
  Node WWN:2000000c5042657d  Device Type:Disk device
    Logical Path:/dev/rdsk/c5t1d0s2
  Node WWN:20000018629c2baf  Device Type:Disk device
    Logical Path:/dev/rdsk/c5t2d0s2
  Node WWN:2000000087b10c29  Device Type:Disk device
    Logical Path:/dev/rdsk/c5t3d0s2
  Node WWN:20000018625dda0c  Device Type:Disk device
    Logical Path:/dev/rdsk/c5t4d0s2
  Node WWN:500000e010f28870  Device Type:Disk device
    Logical Path:/dev/rdsk/c5t5d0s2
  Node WWN:20000018625dfa91  Device Type:Disk device
    Logical Path:/dev/rdsk/c5t6d0s2
Since I have a pile of fibre disks I can perhaps isolate one of them and test with UFS on a single disk.
This one may do fine :
jupiter-sparc-SunOS5.10 # luxadm display 2000000087b10c29
DEVICE PROPERTIES for disk: 2000000087b10c29
  Status(Port B):        O.K.
  Vendor:                HITACHI
  Product ID:            HUS1030FASUN300G
  WWN(Node):             2000000087b10c29
  WWN(Port B):           2200000087b10c29
  Revision:              2A08
  Serial Num:            49VXAJNS 55VAXAJNSA
  Unformatted capacity:  286102.281 MBytes
  Write Cache:           Enabled
  Read Cache:            Enabled
  Minimum prefetch:      0x0
  Maximum prefetch:      0xffff
  Device Type:           Disk device
  Path(s):
  /dev/rdsk/c5t3d0s2
  /devices/pci@8,600000/pci@2/SUNW,qlc@4/fp@0,0/ssd@w2200000087b10c29,0:c,raw
I can try to isolate that and use it with UFS but no hurry, I have other things to do also.
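If it helps, a rough sketch of what isolating that disk for a single-spindle UFS test could look like (the s0 slice and the mount point below are only placeholders, a suitable slice would first need to be set up with format(1M), and newfs destroys whatever is on it, so strictly for a disk that is really spare):

jupiter-sparc-SunOS5.10 # newfs /dev/rdsk/c5t3d0s0
jupiter-sparc-SunOS5.10 # mkdir -p /mnt/ufs_single
jupiter-sparc-SunOS5.10 # mount -o logging /dev/dsk/c5t3d0s0 /mnt/ufs_single

Then the star and gtar extractions could be repeated inside /mnt/ufs_single.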
Dennis
Dennis Clarke dclarke@blastwave.org wrote:
Comparison numbers are only valid if the tests run are the same.
So here is the UFS test once more without the compression and with -no-fifo :
jupiter-sparc-SunOS5.10 # ptime /opt/schily/bin/star -x -xdir -xdot
-no-fifo -U file=../linux-3.5.1.tar
star: 46849 blocks + 0 bytes (total of 479733760 bytes = 468490.00k).
real 27:44.237 user 2.031 sys 42.419
Not a good result.
So try to think about the reasons... star is definitely not the reason. The fact that you spend 10x the expected amount of SYS CPU time points to a problem on your system.
Also, the USER CPU time is 8x the expected amount. Did you run this test on _very_ old hardware?
Jörg
It would be reasonable to think of a Sun Fire V480 as old hardware, yes, but not *very* old. I do have *very* old hardware if you would like me to test there.
Well, this machine is 11 years old now.
This explains the large amount of CPU time.
The server runs fine, is patched up to date. The UFS filesystem that was used is actually the root filesystem and it is a metadevice mirror of the two internal disks.
A simple mirror is slow.
However, a 300 GB FCAL drive should be faster.
But... did you turn on logging?
Jörg
Well, this machine is 11 years old now. This explains the large amount of CPU time.
Quad 900MHz UltraSparc III processors are more than enough to handle a simple filesystem.
The server runs fine, is patched up to date. The UFS filesystem that was used is actually the root filesystem and it is a metadevice mirror of the two internal disks.
A simple mirror is slow.
I don't agree.
jupiter-sparc-SunOS5.10 # pwd
/usr/local/src
jupiter-sparc-SunOS5.10 # which mkfile
/usr/sbin/mkfile
jupiter-sparc-SunOS5.10 # ptime /usr/sbin/mkfile 1024m one_gig.dat
real 30.874 user 0.206 sys 5.574
jupiter-sparc-SunOS5.10 # ls -l
total 3035664
-rw-r--r--   1 root     root      479733760 Aug  9 15:44 linux-3.5.1.tar
-rw------T   1 root     root     1073741824 Aug 13 20:00 one_gig.dat
jupiter-sparc-SunOS5.10 # rm one_gig.dat
jupiter-sparc-SunOS5.10 #
Meanwhile in another xterm I ran iostat :
$ iostat -xc -d ssd0 ssd2 5
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.2     4.8     1.5     46.2   0.0   0.1   34.2   0   3   1  2  0 97
ssd2     0.1     4.6     1.2     46.1   0.0   0.1   34.7   0   3
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.0     9.2     0.0      5.0   0.0   0.2   20.7   0   5   0  1  0 99
ssd2     0.0     6.4     0.0      3.6   0.0   0.1   15.5   0   3
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.0     0.0     0.0      0.0   0.0   0.0    0.0   0   0   0  1  0 99
ssd2     0.0     0.0     0.0      0.0   0.0   0.0    0.0   0   0
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.4   111.4     2.0  12546.4   6.3   4.3   94.9  25  36   1  2  0 97
ssd2     0.0   104.6     0.0  12476.0   5.9   4.1   96.1  24  31
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.8   326.4     6.4  37426.6  28.6  13.1  127.4  80 100   0  6  0 93
ssd2     0.0   317.8     0.0  37423.8  26.4  12.5  122.6  76  91
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.2   322.6     1.6  35021.9  23.2  12.5  110.7  73  98   0 17  0 83
ssd2     1.2   312.8     9.6  35104.7  23.0  12.5  113.1  73  92
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     2.2   294.8    20.8  30385.4  16.8  11.9   96.8  64  98   0 11  0 89
ssd2     1.4   284.8    11.2  30418.4  16.1  11.2   95.4  63  90
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.0   314.0     0.0  33055.0  20.3  12.2  103.6  69  98   0  5  0 95
ssd2     1.8   300.2    14.4  32987.6  19.1  11.7  102.0  67  89
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.4   312.8     3.2  33398.0  26.7  12.8  126.0  75  98   0  6  0 94
ssd2     1.6   304.4    12.8  33424.0  24.0  11.9  117.1  71  89
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     2.4   278.6    19.2  28227.5  24.8  10.8  126.9  63  94   1  7  0 92
ssd2     1.0   259.2     8.0  28187.7  21.3   9.6  118.6  57  75
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.0     9.4     0.0     27.6   0.0   0.2   25.1   0   6   0  1  0 99
ssd2     0.0     6.6     0.0     26.2   0.0   0.1   15.3   0   3
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.0    18.0     0.0     62.3   0.0   0.4   21.2   0  11   0  1  0 98
ssd2     0.2    13.2     0.2     59.9   0.0   0.2   16.8   0   7
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.4    29.0     3.2    307.1   0.0   0.8   26.2   0  15   0  2  0 98
ssd2     0.2    29.0     1.6    306.7   0.0   0.8   29.1   0  15
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.0     0.8     0.0      0.4   0.0   0.0   10.0   0   1   0  5  0 95
ssd2     0.0     0.0     0.0      0.0   0.0   0.0    0.0   0   0
                 extended device statistics                        cpu
device   r/s     w/s    kr/s     kw/s  wait  actv  svc_t  %w  %b  us sy wt id
ssd0     0.0     1.8     0.0      1.2   0.0   0.0    7.8   0   1   0  1  0 99
ssd2     0.0     0.6     0.0      0.6   0.0   0.0    7.1   0   0
^C
$
Bulk throughput would be 1024 MB / 30.874 sec =
jupiter-sparc-SunOS5.10 # dc
9k
1024 30.874 / p
33.167066139
That is roughly 33 MB/s. Not bad.
However, a 300 GB FCAL drive should be faster.
Probably, but not much.
But... did you turn on logging?
UFS logging? Yes, always. However, that would be the same for both the gtar and star tests.
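For completeness, the logging option is easy to confirm on the live system; something along these lines (the grep patterns are only illustrative):

jupiter-sparc-SunOS5.10 # mount -v | grep 'on / '      # the option list should contain "logging"
jupiter-sparc-SunOS5.10 # grep -v '^#' /etc/vfstab     # check the mount options column for "logging"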
Dennis