[CentOS] file i/o operations...

bruce

bedouglas at earthlink.net
Fri Aug 25 17:10:58 UTC 2006


hi...

i'm trying to determine which is the better approach to take: should an
app do a great deal of file i/o, or should it do a great deal of reads/writes
to a mysql db...

my test app will spawn a large number of child processes, 1000's of them
running simultaneously, and each child process will generate data. the data
will ultimately need to be inserted into a db.
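for reference, here's a rough sketch of the kind of spawning i mean (the
child count and the work each child does are just placeholders):

#include <stdio.h>
#include <sys/types.h>
#include <sys/wait.h>
#include <unistd.h>

#define NUM_CHILDREN 1000   /* placeholder; the real test would go higher */

int main(void)
{
    int i;
    for (i = 0; i < NUM_CHILDREN; i++) {
        pid_t pid = fork();
        if (pid < 0) {          /* hit a process/resource limit */
            perror("fork");
            break;
        }
        if (pid == 0) {
            /* child: generate its data here, then exit */
            printf("child %d (pid %d) producing data\n", i, (int)getpid());
            _exit(0);
        }
    }
    while (wait(NULL) > 0)      /* parent: reap all the children */
        ;
    return 0;
}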

Approach 1
-----------
if i have each child app write to a file, i'm going to take a serious hit on
the disk from the file i/o, but i'm pretty sure CentOS/RH could handle it.
(although, to be honest, i don't know if there's a limit on the number of
file descriptors the os allows to be open at the same time.) i'm assuming
that number is orders of magnitude larger than the number of simultaneous
connections i can have with a db....
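as far as i can tell, the per-process limit is what `ulimit -n` reports
(commonly 1024 by default) and the system-wide cap is in
/proc/sys/fs/file-max; since each child is its own process and only holds
one descriptor at a time, i'd expect the disk, not the fd limits, to be the
bottleneck. roughly what each child would do (the spool path is made up):

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/resource.h>
#include <unistd.h>

/* each child appends to its own spool file, named by pid */
int child_write(const char *line)
{
    char path[64];
    int fd;
    struct rlimit rl;

    /* per-process descriptor limit (what ulimit -n reports) */
    if (getrlimit(RLIMIT_NOFILE, &rl) == 0)
        fprintf(stderr, "fd limit for this process: %ld\n",
                (long)rl.rlim_cur);

    snprintf(path, sizeof(path), "/var/spool/myapp/%d.out", (int)getpid());
    fd = open(path, O_WRONLY | O_CREAT | O_APPEND, 0644);
    if (fd < 0)
        return -1;
    write(fd, line, strlen(line));
    close(fd);
    return 0;
}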

i could then have a process/app collect the information from each output
file, write it to the db, and delete the output files as required.
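the collector could be something like this rough sketch, using the mysql c
api (the host, credentials, table/column names, and spool path are all made
up; link with -lmysqlclient):

#include <dirent.h>
#include <mysql.h>      /* mysql c api */
#include <stdio.h>
#include <string.h>
#include <unistd.h>

#define SPOOL "/var/spool/myapp"    /* made-up spool dir */

int main(void)
{
    DIR *d;
    struct dirent *e;
    MYSQL *db = mysql_init(NULL);

    if (!mysql_real_connect(db, "master-host", "user", "pass",
                            "mydb", 0, NULL, 0)) {
        fprintf(stderr, "connect: %s\n", mysql_error(db));
        return 1;
    }
    d = opendir(SPOOL);
    while (d && (e = readdir(d)) != NULL) {
        char path[512], buf[2048], esc[4097], query[4200];
        FILE *f;
        if (e->d_name[0] == '.')
            continue;
        snprintf(path, sizeof(path), SPOOL "/%s", e->d_name);
        if ((f = fopen(path, "r")) == NULL)
            continue;
        while (fgets(buf, sizeof(buf), f)) {
            /* escape each line and insert it as one row */
            mysql_real_escape_string(db, esc, buf, strlen(buf));
            snprintf(query, sizeof(query),
                     "INSERT INTO results (payload) VALUES ('%s')", esc);
            if (mysql_query(db, query) != 0)
                fprintf(stderr, "insert: %s\n", mysql_error(db));
        }
        fclose(f);
        unlink(path);   /* this child's output is now in the db */
    }
    if (d)
        closedir(d);
    mysql_close(db);
    return 0;
}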

Approach 2
-----------
i could have each child app write to a local db, with each child waiting to
get the next open db connection. this is limited, as i'd run into the db's
max connection limit. i'd also have to implement a process to move the
information from the local db to the master db...
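the local-to-master mover might look roughly like this (again, hosts,
credentials, and the table layout are invented for the sketch):

#include <mysql.h>
#include <stdio.h>
#include <string.h>

/* move rows from the local db to the master, then clear them out */
int transfer(void)
{
    MYSQL *local = mysql_init(NULL);
    MYSQL *master = mysql_init(NULL);
    MYSQL_RES *res;
    MYSQL_ROW row;

    if (!mysql_real_connect(local, "localhost", "user", "pass",
                            "mydb", 0, NULL, 0) ||
        !mysql_real_connect(master, "master-host", "user", "pass",
                            "mydb", 0, NULL, 0))
        return -1;
    if (mysql_query(local, "SELECT payload FROM results") != 0)
        return -1;
    res = mysql_store_result(local);
    while ((row = mysql_fetch_row(res)) != NULL) {
        char esc[4097], query[4200];
        mysql_real_escape_string(master, esc, row[0], strlen(row[0]));
        snprintf(query, sizeof(query),
                 "INSERT INTO results (payload) VALUES ('%s')", esc);
        mysql_query(master, query);
    }
    mysql_free_result(res);
    mysql_query(local, "DELETE FROM results");  /* rows live on the master now */
    mysql_close(local);
    mysql_close(master);
    return 0;
}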

Approach 3
-----------
i could have each child app write directly to the db. this would be the
cleanest solution, but the problem with this approach is that the db has a
maximum number of simultaneous connections, based on system resources...
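as far as i know that max is mysql's max_connections variable (the default
is around 100) and it can be raised in my.cnf at the cost of memory per
connection. the other half is having each child back off and retry when the
server is full, something like:

#include <mysql.h>
#include <stdio.h>
#include <unistd.h>

/* connect directly, backing off while the server is full.
   error 1040 is mysql's "too many connections". */
MYSQL *connect_with_backoff(void)
{
    unsigned int delay = 1;

    for (;;) {
        MYSQL *db = mysql_init(NULL);
        if (mysql_real_connect(db, "master-host", "user", "pass",
                               "mydb", 0, NULL, 0))
            return db;
        if (mysql_errno(db) != 1040) {  /* some other failure: give up */
            fprintf(stderr, "connect: %s\n", mysql_error(db));
            mysql_close(db);
            return NULL;
        }
        mysql_close(db);
        sleep(delay);                   /* exponential backoff, capped */
        if (delay < 32)
            delay *= 2;
    }
}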


so... anybody have any thoughts/comments as to how an app can essentially
accept 1000's-10000's of simultaneous hits...

i've been trying to find out if there's any kind of distributed
parent/child/tiered design, where information/data is more or less
collected and aggregated at the node level...
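one single-box version of that idea that occurred to me is a fifo: every
child writes one line to it (writes of up to PIPE_BUF bytes, 4096 on linux,
are atomic, so lines don't interleave), and a single collector drains it and
batches the inserts. a rough sketch with a made-up path:

#include <fcntl.h>
#include <stdio.h>
#include <string.h>
#include <sys/stat.h>
#include <unistd.h>

#define FIFO_PATH "/var/run/myapp.fifo"     /* made-up path */

/* child side: one line per write, kept under PIPE_BUF so it's atomic */
int child_report(const char *line)
{
    int rc = -1;
    int fd = open(FIFO_PATH, O_WRONLY);
    if (fd >= 0) {
        rc = (write(fd, line, strlen(line)) < 0) ? -1 : 0;
        close(fd);
    }
    return rc;
}

/* collector side: drain the fifo; the batched db insert would go here */
int collector(void)
{
    char buf[4096];
    FILE *f;

    mkfifo(FIFO_PATH, 0666);    /* ok if it already exists */
    if ((f = fopen(FIFO_PATH, "r")) == NULL)
        return -1;
    while (fgets(buf, sizeof(buf), f))
        fputs(buf, stdout);     /* stand-in for the batched INSERT */
    fclose(f);
    return 0;
}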

thanks

-bruce



