[CentOS] Good value for /proc/sys/vm/min_free_kbytes

Fri Dec 8 02:19:01 UTC 2006
John R Pierce <pierce at hogranch.com>

> Just out of curiosity John, are you allowed to give us some hints 
> about what your system does? If you are posting on the CentOS list i 
> presume you are running CentOS, rather than "a similar upstream product".
> Also I'd love to know what you mean by "you do NOT want to have a 
> single 10TB volume" - are you referring to performance or 
> single-point-of-failure issues?
we're drifting pretty far off topic for this list, I was going to let 
this thread drop... but...

I work for a big 'widget' maker.    we develop and pilot our complex 
little 'widgets' in the states, then volume production is done in our 
large scale plants overseas.     my department is the core development 
group for both the core database and the middleware used in our 
in-house process tracking system.  

I use centos in my group's development lab, mostly for prototyping our 
messaging/middleware servers...  We've validated our databases on 
rhel/centos, and smaller design center operations who don't need the big 
sun/emc approach are starting to deploy with oracle on rhel on opteron 
servers instead of the traditional Solaris/Sun platforms.

re: single 10TB volume, I think it's both too many eggs in one basket and 
performance.    I'm not in operations, I'm the systems guy in the 
development group, so a lot of what is decided in production I hear only 
2nd hand.   I've been told that the big SANs they use don't do as well 
with very large volume groups as they do with several smaller ones...  at 
the big sites, the SAN is something like ~1000 72GB 15K rpm fc drives.    I 
believe Operations uses Veritas VxVM & VxFS on the big Suns (E20k, etc), 
and it may have performance issues with multiterabyte single file 
systems, too.   Our databases do a LOT of disk writes from synchronous 
commits, while most of the reads are cached (these servers have a LOT of 
ram).   The way our database is organized, there's a nested set of 
tablespace groups that specific tables are bucketed in, so we can run 
quite nicely with 3 raids on a smaller system, and 36 raids on a really 
big system.   This makes the space allocation fairly flexible, yet avoids 
spindle collisions during commonly used complex joins and so forth.
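
roughly, the bucketing idea looks like this (a made-up sketch, not our 
actual layout -- the group names and the round-robin placement are just 
for illustration; the real mapping is decided by the DBAs per site):

```python
from itertools import cycle

def assign_groups_to_raids(groups, raid_count):
    """Round-robin tablespace groups across the available RAID sets,
    so tables that get joined together tend to land on different spindles."""
    raids = cycle(range(raid_count))
    return {group: f"raid{next(raids)}" for group in groups}

# hypothetical group names; a small site might have 3 raids, a big one 36,
# but the same grouping scheme works for both
groups = ["lots_meta", "lots_history", "steps", "equipment", "hot_indexes"]
small_site = assign_groups_to_raids(groups, 3)
big_site = assign_groups_to_raids(groups, 36)
```

with 3 raids the 5 groups wrap around (two raids hold two groups each); 
with 36 raids every group gets its own spindles, without changing the 
schema at all.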