On Jan 25, 2010, at 6:22 PM, JohnS <jses27 at gmail.com> wrote:

> On Mon, 2010-01-25 at 09:45 -0500, Ross Walker wrote:
>> On Jan 25, 2010, at 6:41 AM, Noob Centos Admin
>> <centos.admin at gmail.com> wrote:
>>
>>> Hi,
>>>
>>>> 20 fields or columns is really nothing. BUT that's dependent on
>>>> the type of data being inserted.
>>>
>>> 20 was an arbitrary number :)
>>>
>>>> OK, so break the one table down and create 2 or more, then you
>>>> will have "joins" & clustered indexes, thus possibly slowing you
>>>> down more. That is greatly dependent on your select, delete, and
>>>> update scripts.
>>>
>>> That was the reason the original developer gave for having these
>>> massive rows! Admittedly it is easier to read, but when each row
>>> also contains text/blob fields, they tend to grow rather big. Some
>>> users have been complaining the server seems to be getting
>>> sluggish, so I'm trying to plan ahead and make changes before it
>>> becomes a real problem.
>>
>> Split the TEXT/BLOB data out of the primary table into tables of
>> their own, indexed to the primary table by its key column.
>
> Would seem like a good idea in theory only. You're back to all but
> basic joins on the table now.

I'm sure not all transactions will require the BLOB/TEXT fields.

> Instead look at the way your PHP code is encoding the BLOB data
> and, if you really need the speed, since it's now a MySQL DB, make
> your own custom C API for MySQL to encode the BLOB. The DB can do
> this much faster than your PHP app ever thought of.

I believe the OP said he was running PostgreSQL.

>>>> Possibly very correct, but Nate is very correct on how you are
>>>> accessing the DB, i.e. direct I/O, also. Your fastest access
>>>> comes from optimized SPROCs, triggers and TSQL. Slam enough
>>>> memory into the server and load it in memory.
>>>
>>> It's an old server with all slots populated, so adding memory is
>>> not an option. I thought of doing an image and porting it into a
>>> VM on a newer/faster machine. But then, at the rate this client's
>>> usage is growing, I foresee that as simply delaying the
>>> inevitable.
>>
>> Think about distributing the parts to different boxes as
>> necessary. You can start with the DBMS, which is the logical
>> candidate.
>>
>>>> If speed is what you're after, why are you worried about VFS?
>>>> CentOS does support raw disk access (no filesystem).
>>>
>>> To be honest, I don't really care about VFS since I didn't know it
>>> existed until I started looking up Linux file/disk caching :D
>>>
>>> So I assumed that was what PHP and DBMSes like MySQL/PostgreSQL
>>> would be working through. It made sense since they wouldn't need
>>> to worry about what filesystem was really used.
>>
>> On the DBMS backend, give it plenty of memory, good storage for
>> the workload and good networking.
>>
>> On the Apache/PHP side, look for a good DBMS inter-connect and
>> some PHP caching module, and of course enough CPU for the PHP code
>> and network for the Apache+DBMS inter-connect.
>
> Make sure PHP is creating and tearing down connections on insert:
> flush() and connection.close().
>
>> If you wanted to split it up even more you could look into some
>> sort of PHP distributed cache/processing system and have PHP
>> processed behind Apache.
>
> You really need a good SQL book to sit down and read, like the one
> by Peter Brawley and Arthur Fuller @ artfulsoftware.com,
> co-authored by the original MySQL owners. It is the best one you
> will get.

Is it restricted to purely MySQL?

-Ross
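
P.S. In case it helps, here's a rough sketch of the split I mean,
using PHP's pgsql functions since the OP is on PostgreSQL. All the
table and column names here are made up for illustration:

  <?php
  // Hypothetical schema: "posts" keeps the small, frequently read
  // columns; "post_bodies" holds the TEXT/BLOB data, keyed back to
  // posts.id.
  $db = pg_connect('host=dbhost dbname=app user=web');

  // Hot path (listing pages etc.): never touches the big columns.
  $list = pg_query($db,
      'SELECT id, title, updated_at FROM posts
        ORDER BY updated_at DESC LIMIT 20');

  // Only when a single record is displayed do we join in the body,
  // so most transactions never pay the TEXT/BLOB I/O cost.
  $row = pg_query_params($db,
      'SELECT p.title, b.body
         FROM posts p
         JOIN post_bodies b ON b.post_id = p.id
        WHERE p.id = $1',
      array(42));

John is right that you get a join back, but it only shows up on the
one query that actually needs the body.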
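
P.P.S. On John's point about connection handling, a minimal version
with the same pgsql functions. Again only a sketch; whether
per-request connections or persistent ones via pg_pconnect() win
depends on the workload:

  <?php
  // Open, do the insert, then tear the connection down explicitly
  // rather than leaving cleanup to PHP's request shutdown.
  $db = pg_connect('host=dbhost dbname=app user=web');
  pg_query_params($db,
      'INSERT INTO posts (title) VALUES ($1)',
      array('hello world'));
  pg_close($db);  // releases the database backend immediately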