[CentOS] compare zfs xfs and jfs o

Dennis Clarke dclarke at blastwave.org
Mon Aug 13 20:04:01 UTC 2012



> Well, this machine is 11 years old now.
> This explains the large amount of CPU time.

Quad 900 MHz UltraSPARC III processors are more than
enough to handle a simple filesystem.
 
> > The server runs fine, is patched up to date. The UFS filesystem that
> > was used is actually the root filesystem and it is a metadevice mirror
> > of the two internal disks.
> 
> A simple mirror is slow.

I don't agree. 
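
Before blaming the mirror it is worth confirming that both submirrors
are online and fully synced, since a degraded or resyncing mirror will
look slow. Something along these lines (d10 is just a placeholder for
whatever the root metadevice is actually called):

# metastat -c          <- one-line state of every metadevice
# metastat d10         <- full detail, including any resync progress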

jupiter-sparc-SunOS5.10 # pwd 
/usr/local/src
jupiter-sparc-SunOS5.10 # which mkfile 
/usr/sbin/mkfile

jupiter-sparc-SunOS5.10 # ptime /usr/sbin/mkfile 1024m one_gig.dat 

real       30.874
user        0.206
sys         5.574
jupiter-sparc-SunOS5.10 # ls -l 
total 3035664
-rw-r--r--   1 root     root     479733760 Aug  9 15:44 linux-3.5.1.tar
-rw------T   1 root     root     1073741824 Aug 13 20:00 one_gig.dat
jupiter-sparc-SunOS5.10 # rm  one_gig.dat
jupiter-sparc-SunOS5.10 # 
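
There is no mkfile on a CentOS box, but a rough equivalent (the file
name and size here are just examples) is dd with a final fsync so the
cache flush gets counted in the elapsed time:

# time dd if=/dev/zero of=one_gig.dat bs=1M count=1024 conv=fsync
# rm one_gig.dat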


Meanwhile, in another xterm, I ran iostat:

$ iostat -xc -d ssd0 ssd2 5 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.2    4.8    1.5   46.2  0.0  0.1   34.2   0   3   1  2  0 97
ssd2      0.1    4.6    1.2   46.1  0.0  0.1   34.7   0   3 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.0    9.2    0.0    5.0  0.0  0.2   20.7   0   5   0  1  0 99
ssd2      0.0    6.4    0.0    3.6  0.0  0.1   15.5   0   3 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0   0  1  0 99
ssd2      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.4  111.4    2.0 12546.4  6.3  4.3   94.9  25  36   1  2  0 97
ssd2      0.0  104.6    0.0 12476.0  5.9  4.1   96.1  24  31 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.8  326.4    6.4 37426.6 28.6 13.1  127.4  80 100   0  6  0 93
ssd2      0.0  317.8    0.0 37423.8 26.4 12.5  122.6  76  91 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.2  322.6    1.6 35021.9 23.2 12.5  110.7  73  98   0 17  0 83
ssd2      1.2  312.8    9.6 35104.7 23.0 12.5  113.1  73  92 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      2.2  294.8   20.8 30385.4 16.8 11.9   96.8  64  98   0 11  0 89
ssd2      1.4  284.8   11.2 30418.4 16.1 11.2   95.4  63  90 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.0  314.0    0.0 33055.0 20.3 12.2  103.6  69  98   0  5  0 95
ssd2      1.8  300.2   14.4 32987.6 19.1 11.7  102.0  67  89 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.4  312.8    3.2 33398.0 26.7 12.8  126.0  75  98   0  6  0 94
ssd2      1.6  304.4   12.8 33424.0 24.0 11.9  117.1  71  89 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      2.4  278.6   19.2 28227.5 24.8 10.8  126.9  63  94   1  7  0 92
ssd2      1.0  259.2    8.0 28187.7 21.3  9.6  118.6  57  75 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.0    9.4    0.0   27.6  0.0  0.2   25.1   0   6   0  1  0 99
ssd2      0.0    6.6    0.0   26.2  0.0  0.1   15.3   0   3 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.0   18.0    0.0   62.3  0.0  0.4   21.2   0  11   0  1  0 98
ssd2      0.2   13.2    0.2   59.9  0.0  0.2   16.8   0   7 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.4   29.0    3.2  307.1  0.0  0.8   26.2   0  15   0  2  0 98
ssd2      0.2   29.0    1.6  306.7  0.0  0.8   29.1   0  15 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.0    0.8    0.0    0.4  0.0  0.0   10.0   0   1   0  5  0 95
ssd2      0.0    0.0    0.0    0.0  0.0  0.0    0.0   0   0 
                 extended device statistics                      cpu
device    r/s    w/s   kr/s   kw/s wait actv  svc_t  %w  %b  us sy wt id
ssd0      0.0    1.8    0.0    1.2  0.0  0.0    7.8   0   1   0  1  0 99
ssd2      0.0    0.6    0.0    0.6  0.0  0.0    7.1   0   0 
^C$ 
$ 
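
The nearest Linux equivalent is iostat from the sysstat package; sda
and sdb below stand in for whatever the two mirror members are really
called, and await/%util there correspond roughly to svc_t/%b above:

$ iostat -xd sda sdb 5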

Bulk throughput works out to 1024 MB / 30.874 sec:

jupiter-sparc-SunOS5.10 # dc 
9k
1024 30.874 / p
33.167066139

Roughly 33 MB/sec. Not bad.
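
That squares with the iostat output above: at the busiest interval each
disk was pushing about 37426 kw/s, and the same dc arithmetic puts that
at roughly 36.5 MB/sec per spindle, so the disks were close to flat out:

jupiter-sparc-SunOS5.10 # dc
9k
37426.6 1024 / p
36.549414062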

> However, a 2300 GB FCAL drive should be faster.

Probably, but not much. 

> But... did you turn on logging?

UFS logging? Yes. Always. However, that would have been the same for the gtar and star tests as well.
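
For reference, UFS logging is just a mount option; the vfstab line for
a root metadevice looks something like this (d10 again being a made-up
name) and mount -v shows whether the live mount picked it up:

# grep ufs /etc/vfstab
/dev/md/dsk/d10  /dev/md/rdsk/d10  /  ufs  1  no  logging
# mount -v | grep logging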

Dennis 



