On 1/27/2010 8:30 AM, Ross Walker wrote:
>
>> This is part of what I was planning to do; there is a lot of stuff I
>> am planning to split out into their own tables with reference keys.
>> The problem is I'm unsure whether the added overhead of joins would
>> negate the IO benefits, hence trying to figure out more about how
>> CentOS/Linux does the caching.
>
> The idea behind it is you don't need to execute a join if you don't
> need the extra data.

I've seen MySQL do some really stupid things, like a full 3-table join
into a (huge) disk temporary table when the select had a 'limit 10' and
was ordered by one of the fields that had an index.

>>> If you wanted to split it up even more you could look into some sort
>>> of PHP distributed cache/processing system and have PHP processed
>>> behind Apache.
>>
>> Thanks for the heads-up; I didn't realize it was possible to separate
>> the PHP processing from Apache itself. However, for the time being,
>> I'm probably still limited to a single-server situation, so I'll keep
>> this in mind for the future.
>
> I was actually thinking of distributing the caching of the data rather
> than the PHP processing, but you can have multiple PHP front-end
> servers, one or two mid-tier caching (and possibly pre-processing)
> servers, and then a couple of backend DB servers (replicas) for reads
> and a master for writes.

memcache is still the quick fix here. You can distribute the cache
across any nearby machines regardless of whether or not you run PHP
there. And you can often cache higher-level objects, like parts of the
page that might be reused, to offload even more than the database
activity.

-- 
Les Mikesell
lesmikesell at gmail.com
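The 'limit 10' anecdote above is about a planner failing to use an available index for ORDER BY. A minimal sketch of what the plan *should* look like, using Python's built-in sqlite3 as a stand-in (an assumption; MySQL's planner is a different beast, which is the point of the complaint), with a made-up `posts` table:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE posts (id INTEGER PRIMARY KEY, created INTEGER, body TEXT)")
conn.execute("CREATE INDEX idx_posts_created ON posts (created)")
conn.executemany("INSERT INTO posts (created, body) VALUES (?, ?)",
                 [(i, "x") for i in range(1000)])

# With an index on the ORDER BY column, the planner can walk the index
# and stop after 10 rows instead of materializing and sorting everything
# into a temporary table.
plan = conn.execute(
    "EXPLAIN QUERY PLAN SELECT * FROM posts ORDER BY created LIMIT 10"
).fetchall()
print(plan)  # the plan names idx_posts_created, not a temp b-tree sort
```

Dropping the index and re-running the EXPLAIN shows the opposite: a full scan plus a temp b-tree for the ORDER BY, which is roughly the behavior being complained about.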
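The fragment-caching suggestion at the end can be sketched as a cache-aside pattern: key a rendered page fragment by its inputs, so repeated hits skip both the database query and the rendering. A plain dict stands in for memcache here (an assumption; a real setup would use a memcache client with the same get/set calls and a TTL), and `fetch_user_from_db` is a hypothetical stand-in for the expensive query:

```python
cache = {}
db_hits = 0

def fetch_user_from_db(user_id):
    global db_hits
    db_hits += 1                      # pretend this is an expensive query
    return {"id": user_id, "name": "user%d" % user_id}

def render_profile_fragment(user_id):
    key = "fragment:profile:%d" % user_id
    html = cache.get(key)
    if html is None:                  # cache miss: query, render, store
        user = fetch_user_from_db(user_id)
        html = "<div>%s</div>" % user["name"]
        cache[key] = html             # a real client would set a TTL here
    return html

render_profile_fragment(1)
render_profile_fragment(1)            # second call is served from cache
print(db_hits)  # → 1
```

Because the key is just a string, the same pattern works unchanged whether the dict is local or the get/set goes over the wire to memcached boxes elsewhere on the LAN, which is what makes the cache easy to spread across nearby machines.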