[CentOS] Finding i/o bottleneck
Nicolas Ross
rossnick-lists at cybercat.ca
Wed Sep 28 19:54:40 UTC 2011
> >> Not sure how gfs2 deals with client caching, but in other scenarios
> >> it's probably easier to just throw a lot of ram in the system and let
> >> the filesystem cache do its job. You still have to deal with
> >> applications that need to fsync(), though.
> >
> > Our nodes all have 12 GB of DDR3 RAM, which should be plenty. On the node
> > running the application I'm dealing with, about half of it is used.
>
> Yes, but how does gfs2 deal with filesystem caching? There must be
> some restriction and overhead to keep it consistent across nodes.
Yes indeed. AFAIK, when reading (which is mostly our case), a node reads the
data from disk and is allowed to cache it. When another node wants to write to
that file, it must first tell the other nodes to flush their cached copies of
it. But that is only my understanding of the glock mechanics; I might be
wrong.
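For what it's worth, the kernel exposes per-filesystem glock state through
debugfs, so you can peek at it while the contention is happening. The sketch
below is just that, a sketch: it assumes debugfs is mounted at
/sys/kernel/debug, and the grep for "W" is a crude heuristic for waiter flags
in the dump, not an exact parser of the glock format.

```shell
#!/bin/sh
# Sketch: look for GFS2 glock contention via the debugfs glock dump.
# Assumes debugfs is mounted: mount -t debugfs none /sys/kernel/debug
check_glocks() {
    dir=/sys/kernel/debug/gfs2
    if [ -d "$dir" ]; then
        for f in "$dir"/*/glocks; do
            [ -e "$f" ] || continue
            # Crude heuristic: lines carrying a W flag are lock waiters;
            # a high count suggests nodes fighting over the same glocks.
            echo "$f: $(grep -c 'W' "$f") lines with a W flag"
        done
    else
        echo "no gfs2 debugfs directory (debugfs not mounted, or not a gfs2 node)"
    fi
}
check_glocks
```

On a non-GFS2 box it just prints the "no gfs2 debugfs directory" line, so it
is safe to run anywhere.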
I have opened a ticket with RH to help track down the source of the i/o
contention.
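In the meantime, a rough way to see the fsync() overhead mentioned earlier is
to compare a buffered write against one forced out with GNU dd's conv=fsync.
The /tmp path below is hypothetical; on a real node you'd point it at the GFS2
mount to measure the actual filesystem.

```shell
# Rough comparison of page-cache-buffered vs fsync'd write throughput.
# /tmp/ddtest is a hypothetical scratch file; substitute a path on the
# gfs2 mount to measure the clustered filesystem itself.
buffered=$(dd if=/dev/zero of=/tmp/ddtest bs=1M count=16 2>&1)
synced=$(dd if=/dev/zero of=/tmp/ddtest bs=1M count=16 conv=fsync 2>&1)
rm -f /tmp/ddtest

echo "buffered: $buffered"
echo "fsync:    $synced"
```

The gap between the two throughput figures gives a feel for how much the
sync-on-write path costs on that node.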
Regards,