[CentOS] Need advice on storage

redhat at mckerrs.net
Tue Nov 13 07:02:34 UTC 2007


----- Original Message ----- 
From: "boisvert guy" <boisvert.guy at videotron.ca> 
To: "CentOS mailing list" <centos at centos.org> 
Sent: Tuesday, November 13, 2007 4:47:35 PM (GMT+1000) Australia/Brisbane 
Subject: Re : Re: Re : Re: [CentOS] Need advice on storage 

----- Original Message ----- 
From: "redhat at mckerrs.net" <redhat at mckerrs.net> 
Date: Monday, November 12, 2007 10:28 pm 
Subject: Re: Re: Re: [CentOS] Need advice on storage 
To: CentOS mailing list <centos at centos.org> 
> 
> 
> Guy, 
> 
> How many drives can the server take? And how many are in it 
> just now? 
> 
> I'd even go as far as to say install the OS and the data on the 
> 3 x 320GB SATA drives and ditch the 80GB IDE. That is what I do, 
> and it is one of the strengths of software RAID: you can mix and 
> match RAID levels on the same physical disks. I'd go for a triple 
> mirror for /boot and install the rest of the OS on the RAID-5 set 
> along with the data (GRUB can't boot off RAID-5 as far as I'm 
> aware). 
> 
> 
> Alternatively, if re-installing or copying your current OS onto 
> new drives is not practical at this point, I'd get 2 x 320GB 
> drives, mirror them, and also get that extra 1GB of RAM. The extra 
> RAM could make more of a difference than 10k RPM drives. I don't 
> think you'll get anywhere near the performance you'd expect 
> from a stripe compared to a mirror. As mentioned earlier, 
> Raptors, IMO, are overrated, and they are way hotter and noisier 
> due to their higher spindle speed. 
> 
> How is the box performing? What are you noticing when it is 
> slow? I/O wait? Can you determine if it is I/O bound or CPU 
> bound? 
> 
> 
> Cheers, 
> 
> Brian. 
> 
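
If I follow the mixed-layout idea correctly, with mdadm it would look roughly like the sketch below: a 3-way RAID-1 for /boot plus a RAID-5 for everything else on the same three disks (the device and partition names are only placeholders, not my actual layout):

    # small partition on each SATA drive, 3-way mirror for /boot
    mdadm --create /dev/md0 --level=1 --raid-devices=3 /dev/sda1 /dev/sdb1 /dev/sdc1
    # remainder of each drive, RAID-5 for the OS and the data
    mdadm --create /dev/md1 --level=5 --raid-devices=3 /dev/sda2 /dev/sdb2 /dev/sdc2

(GRUB would then go on each drive's MBR so any of the three can boot.)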


The server can physically take 5 drives. 

As for RAM, suggestion taken. I'll go buy one more gig. I know that Linux uses RAM aggressively for caching, so it can only help. 
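
At least it will be easy to see where the extra RAM goes and how much time the box spends waiting on disk; something like:

    free -m      # the "cached" column is RAM used for file cache
    vmstat 5     # the "wa" column is the iowait percentage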

When all the clients are connected, refreshing their mail caches, and office activity is high, %iowait goes up to 45-55. I have a script that checks and logs the lsof count for CommuniGate every 5 minutes. Sometimes I see lsof counts as high as 130,000. I had to raise a couple of parameters (in sysctl.conf and limits.conf) because CommuniGate was disconnecting clients and I saw messages like: 

IMAP the 'accept' call failed. Error Code=too many files open in this process 
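
(For the curious, the logging is nothing fancy; a crude version run from cron every 5 minutes would be something like the sketch below -- the lsof command-name match and the log path are placeholders, not necessarily what I use:)

    #!/bin/sh
    # count the lines lsof reports for the CommuniGate processes, log with a timestamp
    COUNT=$(lsof -c CommuniGate 2>/dev/null | wc -l)
    echo "$(date '+%F %T')  $COUNT" >> /var/log/communigate-lsof.log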

I'll probably start another thread for this problem. The default CentOS settings for open files were very low. I raised the values and I'm now at 300,000, yet CommuniGate still displays errors from time to time, even though my not-so-precise lsof logging script never shows more than 130,000. I even got "too many files" errors with the lsof log showing as low as 65,000. I'm a little confused; maybe I'll need more elaborate debugging tools. 
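
For the record, the knobs I mean are roughly these -- the 300,000 figure is where I am now, and the exact syntax may need checking:

    # /etc/sysctl.conf -- system-wide ceiling on open file handles
    fs.file-max = 300000

    # /etc/security/limits.conf -- per-user/per-process "nofile" limit
    *    soft    nofile    300000
    *    hard    nofile    300000

(Then sysctl -p, and a fresh login for the limits to apply.) One thing I'm starting to suspect is that lsof lines aren't the same thing as file descriptors (it also lists mmaps, libraries, etc.), and that limits.conf only applies to sessions that go through login, so a daemon started at boot may still be running with the old limit -- which might explain the mismatch.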

As you said, I'm in a situation where migrating the system drive to a RAID-5 array wouldn't be easy. 

But a new possible solution arose when I went to see the physical system tonight: there is a Promise VTRAK 15100 SCSI enclosure in the rack with a free SCSI bus and room for new SATA drives. I think we have a spare Adaptec 39160, so tomorrow I'll check (it's a PCI-X card, but I think it should work in a plain PCI slot too). If I have the Adaptec, I'll probably buy 4 x Western Digital 250GB "YS" (RAID series) drives at about $75 each, put them in the VTRAK, and configure them in RAID 10 for 500GB usable. I briefly checked the VTRAK documentation and it talks about "up to 200 MB/s" throughput and support for NCQ. It remains to be seen how it will perform with 4 disks in RAID 10, but it should easily beat a single 200GB IDE drive! 
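
Once the array shows up as a single SCSI device on the host, a quick before/after test against the current drive should tell me if it's worth it (the /dev/sdb name is only a guess at how the new array will appear):

    hdparm -t /dev/sdb                                      # sequential read off the new array
    dd if=/dev/zero of=/mnt/array/ddtest bs=1M count=2048   # rough sequential write, ~2 GB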

For now, I ran hdparm -t /dev/hdc (stoopid IDE mail drive!) and it gave me 38.21 MB/sec. 

Thanks! 

Guy Boisvert 





Nice kit to have lying around doing nothing! 


The PCI-X SCSI card should work in the Asus board, but it'll run as 32-bit PCI, so your throughput will top out at roughly 133 MB/s, the theoretical maximum of a 32-bit/33 MHz PCI bus. Given that your NIC and the 80GB IDE system drive sit on the same PCI bus, you'll be lucky to get 75 MB/s, I'd say. But that's still almost twice what you're getting now! 
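
Quick arithmetic on the bus limit, for what it's worth:

    32-bit PCI @ 33 MHz : 4 bytes x 33 million transfers/s = about 133 MB/s theoretical
    shared with the NIC and IDE controller, minus overhead -> maybe 75-100 MB/s in practice
    versus the single IDE drive today (hdparm -t)          -> ~38 MB/s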

Good luck. 



