Hi all, I have two servers online and want them to communicate and exchange information with each other from time to time.
I have been developing a web application that is extremely CPU-intensive, and since I don't want to overload the main server that handles the Apache/PHP/MySQL stuff, I got a separate server (a Phenom) to deal with it.
I'd prefer the communication to be secure, low-latency, scalable and two-way, i.e. either server can initiate communication with the other. As the number of users increases, I will need to add more quad-core servers. What would be the easiest/quickest-to-implement way to do this?
Is there a daemon I could use to do this, or would I need to develop my own?
Since I have only ever had a single server, I have never had to do this before, so I wouldn't have a clue where to start. (I Googled "server communication" but got a pile of semi-useless links.)
TIA
D Steward wrote:
Hi all, I have two servers online and want them to communicate and exchange information with each other from time to time.
I have been developing a web application that is extremely CPU-intensive, and since I don't want to overload the main server that handles the Apache/PHP/MySQL stuff, I got a separate server (a Phenom) to deal with it.
I'd prefer the communication to be secure, low-latency, scalable and two-way, i.e. either server can initiate communication with the other. As the number of users increases, I will need to add more quad-core servers. What would be the easiest/quickest-to-implement way to do this?
Is there a daemon I could use to do this, or would I need to develop my own?
Since I have only ever had a single server, I have never had to do this before, so I wouldn't have a clue where to start. (I Googled "server communication" but got a pile of semi-useless links.)
It depends on what you want to exchange. You don't need to design server software. You can use HTTP (if running an HTTP server is OK), stunnel, ssh, ... you can even use MySQL!
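For instance, the ssh option can be as small as the sketch below (purely illustrative: 'worker1' and the crunch script are invented names, and it assumes key-based ssh auth is already set up between the two boxes):

    <?php
    // Hypothetical sketch: trigger the CPU-heavy job on the second server over ssh.
    // Assumes an ssh key pair so no password prompt appears, and that
    // /usr/local/bin/crunch is your own wrapper around the heavy work.
    $jobId = 42;
    $cmd = sprintf('ssh worker1 /usr/local/bin/crunch --job %d 2>&1', $jobId);
    $output = shell_exec($cmd); // blocks until the remote job finishes
    echo $output;
    ?>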
On Tue, 2008-03-18 at 10:06 +0100, mouss wrote:
It depends on what you want to exchange. You don't need to design server software. You can use HTTP (if running an HTTP server is OK), stunnel, ssh, ... you can even use MySQL!
Hi, the amount of data being exchanged is very small (<1 KB), but it *must* be low-latency. Communicating via HTTP/MySQL would mean both servers polling for changes to a database or flat-file DB/logfile, which would be highly undesirable.
D Steward wrote:
On Tue, 2008-03-18 at 10:06 +0100, mouss wrote:
It depends on what you want to exchange. You don't need to design server software. You can use HTTP (if running an HTTP server is OK), stunnel, ssh, ... you can even use MySQL!
Hi, the amount of data being exchanged is very small (<1 KB), but it *must* be low-latency. Communicating via HTTP/MySQL would mean both servers polling for changes to a database or flat-file DB/logfile, which would be highly undesirable.
You can pass 1 KB of info on a URL as arguments to an HTTP 'page' which invokes your program in whatever web language it's written in.
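As a rough illustration (host name and script name are made up), the calling side only needs to build a query string and fetch the URL:

    <?php
    // Caller on the main server: pack the small payload into the query string.
    // Uses allow_url_fopen; the curl extension works just as well.
    $args = array('job' => 'verify_token', 'token' => 'abc123');
    $url  = 'http://worker.example.com/run_job.php?' . http_build_query($args);
    $result = file_get_contents($url);

    // run_job.php on the compute server then reads $_GET['job'] and
    // $_GET['token'] and kicks off the heavy work.
    ?>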
On Tue, 2008-03-18 at 07:41 -0700, John R Pierce wrote:
You can pass 1 KB of info on a URL as arguments to an HTTP 'page' which invokes your program in whatever web language it's written in.
Well, the data is rather sensitive - authentication tokens, IPs, and possibly cookies and/or password hashes - so doing a POST is more prudent than putting it in the URL. I forgot about the possibility of invoking an app or script via PHP, so thanks for the hint.
D Steward wrote:
On Tue, 2008-03-18 at 07:41 -0700, John R Pierce wrote:
You can pass 1 KB of info on a URL as arguments to an HTTP 'page' which invokes your program in whatever web language it's written in.
Well, the data is rather sensitive - authentication tokens, IPs, and possibly cookies and/or password hashes - so doing a POST is more prudent than putting it in the URL. I forgot about the possibility of invoking an app or script via PHP, so thanks for the hint.
You realize POST data is virtually the same as GET data in an HTTP request? If you use HTTPS, then it's all encrypted.
Re: invoking an app by PHP... if you're talking about PHP on the 'server' side, just use cgi-bin the old-fashioned way and your code is invoked directly.
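A rough sketch of the POST-over-HTTPS idea, with placeholder URL and field names:

    <?php
    // Sender: POST the sensitive payload over HTTPS using the curl extension.
    $payload = array('token' => 'abc123', 'client_ip' => $_SERVER['REMOTE_ADDR']);
    $ch = curl_init('https://worker.example.com/ingest.php');
    curl_setopt($ch, CURLOPT_POST, true);
    curl_setopt($ch, CURLOPT_POSTFIELDS, http_build_query($payload));
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, true);
    curl_setopt($ch, CURLOPT_SSL_VERIFYPEER, true); // actually check the peer cert
    $response = curl_exec($ch);
    curl_close($ch);

    // The receiver (ingest.php) just reads $_POST['token'] etc.; with HTTPS the
    // whole request, GET or POST, travels encrypted.
    ?>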
D Steward wrote:
Well, the data is rather sensitive - authentication tokens, IPs, and possibly cookies and/or password hashes - so doing a POST is more prudent than putting it in the URL.
If someone is watching network traffic, the POST and GET methods look identical in terms of how much of the data is visible and how it's visible.
D Steward wrote:
On Tue, 2008-03-18 at 10:06 +0100, mouss wrote:
It depends on what you want to exchange. You don't need to design server software. You can use HTTP (if running an HTTP server is OK), stunnel, ssh, ... you can even use MySQL!
Hi, the amount of data being exchanged is very small (<1 KB), but it *must* be low-latency. Communicating via HTTP/MySQL would mean both servers polling for changes to a database or flat-file DB/logfile, which would be highly undesirable.
How often does the data change, and how critical is it to have real-time results? Web sites often have thousands of people getting copies of the same thing, or at least results computed from the same values, even if they are the same only for a short period of time. One approach is to put memcached between your web application and the database for extremely fast repeated access to the same data. It is just a cache layer, though; you still need a persistent database underneath. http://www.danga.com/memcached/
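For what it's worth, the usual cache-aside pattern with the pecl memcache extension looks roughly like this (server address, key and helper names are just examples):

    <?php
    // Try memcached first, fall back to the persistent database on a miss.
    $cache = new Memcache();
    $cache->addServer('10.0.0.2', 11211);

    $key  = 'session:abc123';
    $data = $cache->get($key);
    if ($data === false) {
        $data = lookup_in_mysql($key);   // hypothetical helper: your existing DB query
        $cache->set($key, $data, 0, 60); // flags = 0, keep for 60 seconds
    }
    ?>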
How often does the data change, and how critical is it to have real-time results? Web sites often have thousands of people getting copies of the same thing, or at least results computed from the same values, even if they are the same only for a short period of time.
The servers will exchange sensitive data, hopefully with a latency of <50 ms. Ping time between them is 20 ms.
One approach is to put memcached between your web application and the database for extremely fast repeated access to the same data. It is just a cache layer, though; you still need a persistent database underneath. http://www.danga.com/memcached/
Ah, thanks. I forgot about memcached. I am presently using some in-memory MySQL tables, but I'll have to benchmark those against memcached. But the second server was procured to relieve the CPU load on the main one; even with a 16-way Opteron, this situation would have had to be faced eventually.
It looks to me like OpenAMQ will be the ultimate solution. I'm amazed that Red Hat, with a finger in almost every open-source pie, hasn't backed this or come out with its own competing option, since it appears to tie in very nicely with clusters.
D Steward wrote:
How often does the data change, and how critical is it to have real-time results? Web sites often have thousands of people getting copies of the same thing, or at least results computed from the same values, even if they are the same only for a short period of time.
The servers will exchange sensitive data, hopefully with a latency of <50 ms. Ping time between them is 20 ms.
That's not the relevant question. How often does the data change relative to the number of times you re-use it?
One approach is to put memcached between your web application and the database for extremely fast repeated access to the same data. It is just a cache layer, though; you still need a persistent database underneath. http://www.danga.com/memcached/
Ah, thanks. I forgot about memcached. I am presently using some in-memory MySQL tables, but I'll have to benchmark those against memcached. But the second server was procured to relieve the CPU load on the main one; even with a 16-way Opteron, this situation would have had to be faced eventually.
The big advantage of memcached is that you can distribute it over as many servers as you need to keep everything in RAM - and have it shared by any number of clients.
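Spreading the cache across boxes is just a matter of registering every node in the pool; the client hashes each key onto one of them (addresses below are made up):

    <?php
    $cache = new Memcache();
    $cache->addServer('10.0.0.2', 11211); // main Apache/PHP/MySQL box
    $cache->addServer('10.0.0.3', 11211); // the Phenom compute box
    // Keys are distributed across the pool automatically, and every web or
    // worker process that registers the same servers sees the same data.
    ?>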
On Wed, 19 Mar 2008, D Steward wrote:
It looks to me like OpenAMQ will be the ultimate solution. I'm amazed that Red Hat, with a finger in almost every open-source pie, hasn't backed this or come out with its own competing option, since it appears to tie in very nicely with clusters.
Read closer: RH is involved. It is being (or has been) integrated into the JBoss suite(s).
------------------------------------------------------------------------ Jim Wildman, CISSP, RHCE jim@rossberry.com http://www.rossberry.com "Society in every state is a blessing, but Government, even in its best state, is a necessary evil; in its worst state, an intolerable one." Thomas Paine
On 3/19/08, Jim Wildman jim@rossberry.com wrote:
On Wed, 19 Mar 2008, D Steward wrote:
It looks to me like OpenAMQ will be the ultimate solution. I'm amazed that Red Hat, with a finger in almost every open-source pie, hasn't backed this or come out with its own competing option, since it appears to tie in very nicely with clusters.
Read closer: RH is involved. It is being (or has been) integrated into the JBoss suite(s).
More info here about AMQP in Red Hat.
mike
Check out the OpenAMQ project for a secure, scalable messaging engine and protocol. It may be overkill for what you want (maybe not), but it should look good on the resume.
http://www.theserverside.com/news/thread.tss?thread_id=41008
On Tue, 18 Mar 2008, D Steward wrote:
I'd prefer the communication to be secure, low-latency, scalable and two-way, i.e. either server can initiate communication with the other. As the number of users increases, I will need to add more quad-core servers. What would be the easiest/quickest-to-implement way to do this?
On Tue, 2008-03-18 at 05:08 -0400, Jim Wildman wrote:
Check out the OpenAMQ project for a secure, scalable messaging engine and protocol. It may be overkill for what you want (maybe not), but it should look good on the resume.
http://www.theserverside.com/news/thread.tss?thread_id=41008
Wow, this looks good! Thanks, Jim :)
For now, it might be overkill, but if it scales well, the effort to adopt it will be well worth it.
On Tue, 18 Mar 2008, D Steward wrote:
On Tue, 2008-03-18 at 05:08 -0400, Jim Wildman wrote:
Check out the OpenAMQ project for a secure, scalable messaging engine and protocol. It may be overkill for what you want (maybe not), but it should look good on the resume.
http://www.theserverside.com/news/thread.tss?thread_id=41008
Wow, this looks good! Thanks, Jim :)
For now, it might be overkill, but if it scales well, the effort to adopt it will be well worth it.
It scales VERY well. Industrial strength.
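For anyone wondering what talking to an AMQP broker looks like from PHP, here is a minimal, hypothetical publish sketch using the php-amqplib client (host, credentials and queue name are invented; OpenAMQ, RabbitMQ or Red Hat's MRG broker could all sit on the other end):

    <?php
    require_once 'vendor/autoload.php';

    use PhpAmqpLib\Connection\AMQPStreamConnection;
    use PhpAmqpLib\Message\AMQPMessage;

    // Connect to the broker and make sure the queue exists (durable = true).
    $conn = new AMQPStreamConnection('broker.example.com', 5672, 'guest', 'guest');
    $ch   = $conn->channel();
    $ch->queue_declare('auth_events', false, true, false, false);

    // Publish a small JSON payload; delivery_mode 2 asks the broker to persist it.
    $msg = new AMQPMessage(json_encode(array('token' => 'abc123')),
                           array('delivery_mode' => 2));
    $ch->basic_publish($msg, '', 'auth_events'); // default exchange -> queue name

    $ch->close();
    $conn->close();
    ?>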