Hi :)
On Mon, Mar 7, 2011 at 12:12 PM, wessel van der aart wessel@postoffice.nl wrote:
Hi All,
I've been asked to set up a 3D render farm at our office. At the start it will contain about 8 nodes, but it should be built to grow. The setup I had in mind is as follows: all the data is already stored on a StorNext SAN filesystem (Quantum). This would be mounted on a CentOS server through fibre optics, which in turn shares the filesystem over NFS to all the render nodes (also CentOS).
From what I can read, you have 1 NFS server only and a separate
StoreNext MDC. Is this correct?
Now, we've estimated that the average file sent to each node will be about 90 MB, so that's what I'd like the average connection to handle. I know gigabit Ethernet should be able to do that (testing with iperf confirms it), but testing the speed to already existing NFS shares gives me a 55 MB/s maximum. As I'm not familiar with performance tuning for network shares, I was wondering if anybody here is and could give me some info on this? I also thought of giving all the nodes 2x 1 Gb Ethernet ports and putting those in a bond; will this do any good, or do I have to look at the NFS server side first?
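To separate network speed from NFS overhead, it helps to run the same kind of measurement through the NFS mount that iperf gives you for the raw link. A minimal sketch with dd, sized to the ~90 MB average job file (the mount point is an assumption; point TESTDIR at your actual NFS mount, e.g. /mnt/stornext):

```shell
# Rough client-side NFS throughput check (sketch; TESTDIR default is just
# so it runs anywhere -- set it to your real NFS mount point).
TESTDIR=${TESTDIR:-/tmp}

# Write a file the size of an average job payload (~90 MB); dd reports the
# effective rate, and conv=fsync forces the data out of the page cache.
dd if=/dev/zero of="$TESTDIR/ddtest.bin" bs=1M count=90 conv=fsync

# Read it back. For an honest read test, drop the client cache first
# (as root):  echo 3 > /proc/sys/vm/drop_caches
dd if="$TESTDIR/ddtest.bin" of=/dev/null bs=1M

rm -f "$TESTDIR/ddtest.bin"
```

If iperf shows ~110 MB/s but dd through the mount shows ~55 MB/s, the bottleneck is NFS/server-side, not the wire. On the bonding question: note that with most bonding modes a single TCP stream stays on one slave link, so a 2x1 GbE bond tends to raise aggregate throughput across many clients rather than the speed any one node sees.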
Things to check would be:
- Hardware:
  * RAM and cores on the NFS server
  * # of GigE & FC ports
  * PCI technology you're using: PCIe, PCI-X, ...
  * PCI lanes & bandwidth you're using up
  * whether you are sharing PCI buses between different PCI boards (FC and GigE): you should NEVER do this. If you have to share a PCI bus, share it between two PCI devices which are the same. That is, you can share a PCI bus between 2 GigE cards or between 2 FC cards, but never mix the devices.
  * cabling
  * switch configuration
  * RAID configuration
  * cache configuration on the RAID controller. Cache mirroring gives you more protection, but less performance.
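Most of the hardware points above can be inventoried from the shell on the NFS server. A sketch with standard Linux tools (the interface name eth0 is an assumption; outputs are machine-dependent):

```shell
# PCI topology as a tree: devices hanging off the same bridge/bus share
# its bandwidth -- the FC and GigE cards should not appear under one bus.
command -v lspci >/dev/null && lspci -tv

# Negotiated link speed/duplex of a NIC (interface name is an assumption).
command -v ethtool >/dev/null && ethtool eth0 2>/dev/null

# RAM and cores on the NFS server.
grep MemTotal /proc/meminfo
grep -c '^processor' /proc/cpuinfo
```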
- Software:
  * check the NFS config. There are some interesting tips if you google around.
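As a starting point for the NFS config, these are the knobs most tuning guides touch first. All values, paths, and the subnet below are illustrative assumptions, not a recommendation for your exact setup; measure before and after each change:

```
## /etc/sysconfig/nfs on the server (CentOS): raise the nfsd thread count
## from the default 8 -- with 8+ render nodes hitting one server, 32 is a
## common starting point (illustrative value).
RPCNFSDCOUNT=32

## /etc/exports: async trades crash safety for speed, which may be
## acceptable on a render farm; no_subtree_check avoids per-request path
## checks. Export path and client subnet are assumptions.
/mnt/stornext  192.168.1.0/24(rw,async,no_subtree_check)

## Client side (/etc/fstab on the render nodes): large rsize/wsize for big
## sequential files, TCP transport. Server name and paths are assumptions.
nfsserver:/mnt/stornext  /mnt/stornext  nfs  rw,tcp,rsize=32768,wsize=32768,hard,intr  0 0
```

After editing /etc/exports, re-export with `exportfs -ra`; the thread count change needs an NFS service restart. Keep in mind that `async` means acknowledged writes can be lost if the server crashes.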
HTH
Rafa