On 08/31/2011 09:21 PM, Jonathan Vomacka wrote:

> Good Evening All,
>
> I have a question regarding CentOS 6 server partitioning. Now I know there are a lot of different ways to partition the system and different opinions depending on the use of the server. I currently have a quad-core Intel system with 8 GB of RAM and a single 1 TB hard drive. In the past, as a FreeBSD user, I have always made separate partitions for the root filesystem (/), swap, /tmp, /usr, /var, and /home. In the partitioning manager I would always specify 10 GB for /, 2 GB or so for swap, 20 GB for /var, 50 GB for /usr, 10 GB for /tmp, and allocate all remaining space to /home as my primary data volume (assuming all my applications are installed and run from my home directories). I was recently told that this is an old style of partitioning and is not used in modern Linux distributions. So, more specifically, here are my questions to the list:
>
> 1) What is a good partition map/scheme for a server whose primary purpose is a LAMP server, DNS (bind), and possibly game servers?

If you have a currently running system serving the same purpose and running the same OS version, consult the partitioning scheme there, adjusting the sizes up or down depending on how much is actually used. The size of /home is going to depend on how many users will have accounts on the system and how much space you're going to allow each of them. I'd double the space a typical user needs, then multiply that by the number of users. Set up a quota system with hard and soft limits to eliminate surprises.

And why not use LVM? Then you can adjust the sizes of the volumes as you need to. If this is to be an enterprise system on which you'll be doing live backups, you may also want to set up a snapshot LV. (A rough sketch of what that can look like follows below, after the swap question.)

> 2) The CentOS docs recommend 10 GB of swap for 8 GB of RAM: 1x the amount of physical memory + 2 GB added (Reference: http://www.centos.org/docs/5/html/Installation_Guide-en-US/s1-diskpartitioning-x86.html). I was told this is ridiculous and will severely slow down the system. Is this true? If so, what is a good swap size for 8 GB of RAM? MIT recommends making MULTIPLE 2 GB swap spaces totaling 10 GB if this is the case. Please help!

In the absence of actual evidence to the contrary, I'd go with the recommendations in the docs regarding swap. As for swap "severely slowing down the system", I think that's bunk. Theoretically, any empty disk space is going to slow down disk reads a bit, but then so would occupied disk space that isn't being used, and neither of these is a "severe" problem. So whoever told you that must be talking about the algorithm that determines what gets written to swap, and when. To have an informed opinion on the usefulness and efficiency of swap would require a detailed understanding of that algorithm, along with an understanding of the system as a whole. Did they write a master's thesis or doctoral dissertation on this topic? Okay, maybe swap is totally unnecessary and the developers are keeping everyone in the dark about the details just to create and keep jobs for themselves. But the code is open source, so that sounds too much like an April Fool's Day posting to Slashdot.

As for MIT's suggestion: yes, that is a good idea. I did this on some machines years back and noticed a considerable speed increase. In most instances, though, all the swap spaces should have the same priority, so the kernel uses them round-robin.
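To make the LVM and equal-priority-swap points above concrete, here's a minimal sketch, assuming LVM was chosen at install time and /home is ext4. The volume group "vg_srv", the LV names, the sizes, and the user "jdoe" are placeholders, not anything from your setup; adjust to taste.

    # Grow /home later if the initial guess turns out to be too small (ext4 grows online)
    lvextend -L +50G /dev/vg_srv/lv_home
    resize2fs /dev/vg_srv/lv_home

    # Snapshot LV for a consistent live backup: back up from the snapshot, then drop it
    lvcreate -s -L 10G -n home_snap /dev/vg_srv/lv_home
    mkdir -p /mnt/backup
    mount -o ro /dev/vg_srv/home_snap /mnt/backup
    # ... run the backup against /mnt/backup ...
    umount /mnt/backup
    lvremove /dev/vg_srv/home_snap

    # Several small swap LVs instead of one big one
    lvcreate -L 2G -n swap0 vg_srv
    lvcreate -L 2G -n swap1 vg_srv
    mkswap /dev/vg_srv/swap0
    mkswap /dev/vg_srv/swap1
    # /etc/fstab entries (identical pri= values make the kernel use them round-robin):
    #   /dev/vg_srv/swap0  swap  swap  pri=5  0 0
    #   /dev/vg_srv/swap1  swap  swap  pri=5  0 0
    swapon -a
    swapon -s          # verify sizes and priorities

    # Per-user quotas on /home: add "usrquota" to its fstab options, then
    mount -o remount /home
    quotacheck -cum /home
    quotaon /home
    setquota -u jdoe 9000000 10000000 0 0 /home   # soft/hard block limits (1K blocks), no inode limit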
> 3) Is EXT4 better or worse than XFS for what I am planning to use the system for?

Which features of each might you plan on making use of?

> Thanks in advance for all your help, guys.
>
> Kind Regards,
> Jonathan Vomacka

If you're truly interested in performance, run some metrics before any users get on the system and periodically thereafter. This will give you a baseline for evaluating the changes you'll inevitably make, as well as those that just happen; a minimal set of commands for that is sketched at the end of this message.

I predict a long and contentious thread on these topics (when do we not have one?), along with scattered earthquakes and sunspots. I'll be missing the impact of most of those, as I'm bringing this machine down for 'maintenance'. Just thought everyone would be interested in knowing that. :)
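For the baseline metrics mentioned above, the standard tools are enough. This is just a starting point, assuming the sysstat package is installed; the 5-second/12-sample intervals are arbitrary.

    # Save this output somewhere and rerun it periodically, so later changes
    # can be compared against a known-good baseline.
    uptime
    free -m
    vmstat 5 12                # CPU, memory, and swap in/out, every 5s for a minute
    iostat -x 5 12             # per-device utilization and await times (sysstat)
    sar -u -r -b 5 12          # CPU, memory, and I/O rates (sysstat)
    df -h                      # filesystem usage, to watch growth over time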