How often does the data change, and how critical is it to have real-time results? Web sites often have thousands of people fetching copies of the same thing, or at least results computed from the same values, even if those values stay the same only for a short period.
The servers will exchange sensitive data, hopefully with a latency of < 50 ms; the ping time between them is 20 ms.
One approach is to put memcached between your web application and the database for extremely fast repeated access to the same data. It is just a cache layer, though; you still need a persistent database underneath. http://www.danga.com/memcached/
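Something along these lines is the usual cache-aside pattern (a minimal sketch using the python-memcached client; it assumes memcached on localhost:11211, and query_database() is just a stand-in for whatever your real MySQL lookup is):

# Minimal cache-aside sketch with the python-memcached client.
# Assumes memcached is running on localhost:11211.
import memcache

mc = memcache.Client(['127.0.0.1:11211'])

def query_database(report_id):
    # Placeholder for the real (slower) MySQL query.
    return 'expensive result for %d' % report_id

def get_report(report_id):
    key = 'report:%d' % report_id
    value = mc.get(key)                    # fast path: already cached
    if value is None:
        value = query_database(report_id)  # slow path: hit the database
        mc.set(key, value, time=60)        # keep it around for 60 seconds
    return value

print(get_report(42))

The point is simply that every request after the first within the cache window never touches the database at all.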
Ahh, thanks. I forgot about memcached. I am presently using some in-memory MySQL tables, but I'll have to benchmark those against memcached. The 2nd server was procured to relieve the CPU load on the main one, though. Even with a 16-way Opteron, this situation would have had to be faced eventually.
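I'll probably start with a rough timing loop like this (everything here is made up: the credentials, the MEMORY-engine table 'mem_table' with columns k and v, and the iteration count; it assumes python-memcached and MySQLdb against local servers):

# Rough comparison of repeated reads: memcached vs. a MySQL MEMORY table.
import time
import memcache
import MySQLdb

mc = memcache.Client(['127.0.0.1:11211'])
db = MySQLdb.connect(host='localhost', user='bench', passwd='secret', db='bench')
cur = db.cursor()

mc.set('k', 'payload')
N = 10000

t0 = time.time()
for _ in range(N):
    mc.get('k')
print('memcached: %.1f us/read' % ((time.time() - t0) / N * 1e6))

t0 = time.time()
for _ in range(N):
    cur.execute("SELECT v FROM mem_table WHERE k = 'k'")
    cur.fetchone()
print('MySQL MEMORY table: %.1f us/read' % ((time.time() - t0) / N * 1e6))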
It looks to me like OpenAMQ will be the ultimate solution. I'm amazed that RedHat, with a finger in almost every open-source pie, hasn't backed this or come out with a competing option of its own, since it appears to tie in very nicely with clusters.
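Roughly what I have in mind between the two servers, sketched with the pika client (this assumes a local broker speaking AMQP 0-9-1, which is the dialect pika implements; I'd still have to verify which protocol revision the OpenAMQ build supports, and the queue name is made up):

# Minimal AMQP publish/consume sketch with the pika client.
# Assumes a broker on localhost; 'server_updates' is a placeholder queue.
import pika

conn = pika.BlockingConnection(pika.ConnectionParameters('localhost'))
channel = conn.channel()
channel.queue_declare(queue='server_updates')

# Server A: push the freshly computed values onto the queue.
channel.basic_publish(exchange='', routing_key='server_updates',
                      body='latest computed values')

# Server B: pull one message off the same queue.
method, properties, body = channel.basic_get(queue='server_updates', auto_ack=True)
print(body)

conn.close()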