[CentOS] CentOS 6 Partitioning Map/Schema

Thu Sep 1 17:34:47 UTC 2011
Jonathan Vomacka <juvix88 at gmail.com>

Lamar,

Excellent email. Thank you so much; you have been very informative!

On 9/1/2011 11:29 AM, Lamar Owen wrote:
> On Wednesday, August 31, 2011 09:21:25 PM Jonathan Vomacka wrote:
>> I was
>> recently told that this is an old style of partitioning and is not used
>> in modern-day Linux distributions.
>
> The only thing old-style I saw in your list was the separate /usr partition.  I like having separate /var, /tmp, and /home.  /var because lots of data resides there that can fill partitions quickly: logs, for instance.  I have built machines where /var/log was separate, specifically for that reason.
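> As a concrete illustration, here is one possible starting layout for a box like yours (the sizes are made up purely for illustration, not a recommendation, and assume LVM with /boot as a plain partition):
>
>     /boot     500MB   (plain partition)
>     /         10GB
>     /var      20GB
>     /var/log  10GB
>     /tmp      4GB
>     swap      10GB
>     /home     whatever is left in the volume group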
>
>> So more accurately, here are my
>> questions to the list:
>>
>> 1) What is a good partition map/schema for a server OS whose
>> primary purpose is to be a LAMP server, DNS (bind), and possibly game servers?
>
> Splitting out filesystems into partitions makes sense primarily, in my opinion and experience, in seven basic aspects:
> 1.) I/O load balancing across multiple spindles and/or controllers;
> 2.) Disk space isolation in case of filesystem 'overflow' (that is, you don't want your mail spool in /var/spool/mail overflowing and corrupting an online database in, say, /var/lib/pgsql/data/base) (and while quotas can help when two trees are not fully isolated, filesystems in different partitions/logical volumes have hard overflow isolation);
> 3.) In the case of really large data stores with dynamic data, isolating the impact of filesystem corruption;
> 4.) The ability to stagger fsck's between boots (the fsck time doesn't seem to increase linearly with filesystem size);
> 5.) I/O 'tiering' (like EMC's FAST) where you can allocate your fastest storage to the most rapidly changing data, and slower storage to data that doesn't change frequently;
> 6.) Putting things into separate filesystems forces the admin to really think through and design the system with all the requirements in mind, instead of just throwing it all together and then wondering why performance is suboptimal;
> 7.) Filesystems can be mounted with options specific to their use cases, and with filesystem technology appropriate to the use case (noexec, for instance, on filesystems that have no business having executables on them; enabling/disabling journalling and other options as appropriate; and using XFS, ext4, etc. as appropriate, just to mention a few things); see the example fstab lines below.
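> For example (device names, filesystem choices, and options here are purely illustrative):
>
>     # /etc/fstab fragment: restrictive options where executables have no business being
>     /dev/vg0/tmp    /tmp      ext4   noexec,nosuid,nodev   1 2
>     /dev/vg0/home   /home     ext4   nosuid,nodev          1 2
>     /dev/vg0/log    /var/log  ext4   noatime               1 2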
>
>> 2) CentOS docs recommend using 10GB of swap for 8GB of RAM: 1x the amount
>> of physical memory, plus 2GB.
>
> If you put swap on LVM and give yourself room to grow, you can increase the swap space at will should you find you need to.  Larger RAM (and the virtual RAM embodied by swap) does not always make things faster.  I have a private e-mail from an admin of a large website showing severe MySQL performance issues that were reduced by making the RAM size smaller (it turned out to be caused by cache mismanagement combined with poorly written queries).
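> A sketch of growing swap on LVM, assuming a volume group vg0 with a logical volume named swap (names are hypothetical):
>
>     swapoff /dev/vg0/swap          # take the swap space offline
>     lvextend -L +4G /dev/vg0/swap  # grow the logical volume by 4GB
>     mkswap /dev/vg0/swap           # re-create the swap signature at the new size
>     swapon /dev/vg0/swap           # bring it back online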
>
> Consider swap to be a safety buffer; the Linux kernel is by default configured to overcommit memory, and swap exists to prevent the oom-killer from reaping critical processes in that situation.  Tuning the swap size, the kernel's 'swappiness', and the overcommit policy should be done together; the default settings are what produced the 'memory size plus 2GB' recommendation for CentOS 5.  Not too long ago, the recommendation was for swap to be twice the memory size.
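> The knobs in question, with illustrative values only (tune them together, for your workload):
>
>     sysctl -w vm.swappiness=10         # how eagerly the kernel swaps
>     sysctl -w vm.overcommit_memory=0   # heuristic overcommit (the default policy)
>     # persist across reboots in /etc/sysctl.conf:
>     #   vm.swappiness = 10
>     #   vm.overcommit_memory = 0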
>
> Multiple swap partitions can improve performance if those partitions are on different spindles; however, this reduces reliability, too.  I don't have any experience with benchmarking the performance of multiple 2GB swap spaces; I'd find results of such benchmarks to be useful information.
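> If you do spread swap across spindles, giving the entries equal priority in /etc/fstab lets the kernel round-robin between them (device names are illustrative):
>
>     /dev/sda2   swap   swap   pri=10   0 0
>     /dev/sdb2   swap   swap   pri=10   0 0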
>
>> 3) Is EXT4 better or worse to use than XFS for what I am planning to use
>> the system for?
>
> That depends; consult some file system comparisons (the Wikipedia file system comparison article is a good starting place).  I've used both, and I still use both.  XFS as a filesystem is older and presumably more mature than ext4, but age is not the only indicator of what will work for you.  One thing to remember is that XFS filesystems cannot currently be reduced in size, only increased.  Ext4 can go either way if you realize you made too large a filesystem.
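> To make that concrete (LV names and sizes are hypothetical; shrinking ext4 must be done unmounted, growing XFS is done mounted):
>
>     umount /srv/data
>     e2fsck -f /dev/vg0/data        # a forced check is required before shrinking
>     resize2fs /dev/vg0/data 50G    # shrink the ext4 filesystem to 50GB
>
>     xfs_growfs /srv/bigxfs         # grow a mounted XFS filesystem to fill its device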
>
> XFS is very fast to create, but repairing one requires absolutely the most RAM of any recovery process I've ever seen.  XFS has seen a lot of use in the field, particularly with large SGI boxes (Altix series, primarily) running Linux (with the requisite 'lots of RAM' needed for repair/recovery).
>
> XFS is currently the only filesystem in which I have successfully made a larger-than-16TB filesystem.  Don't try that on a 32-bit system (in fact, if you care about data integrity, don't use XFS on a 32-bit system at all, unless you have rebuilt the kernel with 8k stacks).  Running mkfs.xfs on a greater-than-16TB partition/logical volume will execute successfully on a 32-bit system (the last time I tried it), but as soon as you go over 16TB of data you will no longer be able to mount the filesystem.  The wisdom of making a greater-than-16TB filesystem of any type is left as an exercise for the reader....
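> If you do go down that road, sanity-check the architecture first (the LV name is hypothetical):
>
>     uname -m                   # should say x86_64 before you even consider it
>     mkfs.xfs /dev/vg0/bigdata  # creating the >16TB filesystem is the easy part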
> _______________________________________________
> CentOS mailing list
> CentOS at centos.org
> http://lists.centos.org/mailman/listinfo/centos