[CentOS] Intel Xeon and hyperthreading

Thu Jun 8 16:45:40 UTC 2006
William L. Maltby <BillsCentOS at triad.rr.com>

On Thu, 2006-06-08 at 11:33 -0400, Sam Drinkard wrote:
> 
> Jim Perrin wrote:
> 
> <snip>

> I suppose, without having extensive knowledge about how the model 
> behaves internally, or for that matter, writing to disk, it really would 
> be trial and error.  There are tools, but I'm not experienced enough to 
> know what they are really telling me either.  I know the model does a 
> lot of reads from data, but don't know if that data is cached on read, 
> or if the whole file is read in at once, then the numbers are crunched 
> and written or what.
> <snip>

Knowledge = power, so pursue that. In the meantime, here are some things
that *used* to be good "rules of thumb" and are worth being alert for as
you investigate. Unfortunately, some of them demand a test bed so you
can 1) recover if necessary and 2) see whether the change really works
without killing end-user goodwill and (potentially) your future
(although anyone from the 8088 days knows the drill... ;-)

1) Many DBMSs claim (and rightfully so) big performance gains if they
sit on a "raw partition" rather than residing in a file system. If it's
a whole disk, you won't even need to partition it; Linux and at least
one other *IX support operating on the bare device without such
bothersome things.
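To get a feel for what the bare device can do before handing it to the
DBMS, a quick sequential-read check is easy. This is only a sketch:
`seq_read_mb` is a made-up helper name, and `/dev/sdb` in the example is
a placeholder for your own disk.

```shell
# Rough sketch: time a sequential read of a device (or any file).
# seq_read_mb is a hypothetical helper, not a standard tool.
seq_read_mb() {
    # $1 = device or file, $2 = number of 1 MiB blocks to read.
    # dd reports throughput on its last stderr line.
    dd if="$1" of=/dev/null bs=1048576 count="$2" 2>&1 | tail -n 1
}

# Example (placeholder device -- substitute your own):
#   seq_read_mb /dev/sdb 256
```

For a truer disk number (rather than the page cache), add
`iflag=direct` if your dd supports O_DIRECT on the target.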

2) If data is read predominantly sequentially and you are in a file
system, use a large block size when you make the file system. This has
more effect when the number crunching is input-data-intensive.
Concomitantly, HDs with a large cache will contribute substantially to
reduced wait time. As you might surmise, all forms of cache are a total
waste if the data (whether key or base) is in completely the wrong
order.
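For ext2/ext3 that means picking the block size at mkfs time -- it
can't be changed later. A sketch, with `/dev/sdb1` as a placeholder
partition (note ext3's block size tops out at the page size, 4 KiB on
i386/x86, so "huge" is relative):

```shell
# Sketch only -- /dev/sdb1 is a placeholder; this destroys the partition's
# contents. 4096 is the largest ext3 block size on a 4 KiB-page machine.
mkfs.ext3 -b 4096 /dev/sdb1

# Verify what you actually got:
tune2fs -l /dev/sdb1 | grep 'Block size'
```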

3) Ditto if reads are random but heavily grouped in consecutive key
ranges. For this to be effective, the data should be stored in
most-frequently-accessed order and, optimally, the index for that
sequence should sit smack in the middle of the data blocks (i.e. first
on disk is approximately 50% of the data, then the index with some slack
for growth, and then the rest of the data). Better still is the index on
another disk on another IDE (or whatever) channel. Can everybody say
"bus mastering"? It's hard to keep things organized this way on a single
disk, but since you're doing a batch operation, maybe you can make a
copy as needed (or is there an HD backup made frequently?) and operate
on that.

Anecdotally demonstrating how much ordering-matched-to-application
matters: in 1988(?) a n00b Oracle admin couldn't understand why my app
took 1:15 (a minute and a quarter) to gen a screen. I *knew* it wasn't
my program -- I'm a performance enthusiast. We talked a bit, and I told
him to reload his DB in a certain sequence.

Result: full screen in about 7 seconds.

To maintain that performance does require periodic reload/reorg.

4) Defrag the appropriate parts occasionally. Whether in a file system
or a raw partition, stuff must occasionally go into "overflow areas".
Unless your DBMS reorgs/defrags on the fly (bad for user-response
complaints), the easiest/fastest/cheapest way is cross-HD
copies/reloads. After each cross-HD pass, scripts direct the apps to the
right disk (on startup, check a timestamp and file/partition name gened
by the reorg process). This only works if you have a quiescent time.
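The "check a timestamp on startup" idea can be sketched in a few lines
of sh. Everything here is invented for illustration: the
`reorg.stamp` marker file (which the reorg job would touch on whichever
copy it just rebuilt) and the `pick_active_copy` helper name.

```shell
# Hedged sketch: pick whichever DB copy the reorg job refreshed last.
# The reorg process is assumed to `touch <copy>/reorg.stamp` when done.
pick_active_copy() {
    # $1, $2 = directories holding the two cross-HD copies.
    a="$1/reorg.stamp"; b="$2/reorg.stamp"
    # test -nt: true if the first file is newer than the second.
    if [ "$a" -nt "$b" ]; then echo "$1"; else echo "$2"; fi
}

# Example (paths are placeholders):
#   DB_DIR=$(pick_active_copy /disk1/db /disk2/db)
```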

5) As Jim (IIRC) mentioned, avoiding access-time updates can provide
gains. If you are fortunate enough that no writes need to be done to the
whole partition while you are crunching the chunks (true if the
partition holds only the DB; if not, consider making it so, Mr. Spock),
remount the partition read-only (mount -o remount,ro
<partition-or-label-or-even-mount-point>) for the duration of the run.
This also benefits in some small VM/kernel-internal ways.
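Bracketing the batch run would look roughly like this. `/data` is a
placeholder mount point assumed to hold only the DB; both commands need
root, and the ro remount will fail if anything still has the partition
open for writing.

```shell
# Sketch -- /data is a placeholder mount point holding only the DB files.
mount -o remount,ro /data    # before the run: no atime (or any) writes
# ... run the number-crunching job ...
mount -o remount,rw /data    # restore afterwards

# If fully read-only isn't possible, noatime alone stops the
# access-time writes:
#   mount -o remount,noatime /data
```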

There's more, but some may apply to only specific scenarios.

> Sam
> <snip sig stuff>

HTH
-- 
Bill