Hi,
Just checked ndbd from MySQL (MySQL Cluster); I tried it with the "simple" table format. It's great, but there are 2 major drawbacks:
1. It only supports a single table format (NDB), which doesn't support foreign keys (for me this is problematic); see the quick illustration below.
2. The whole database is stored in RAM, so large data sets aren't welcome.
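For what it's worth, here's a quick illustration of point 1 (just a sketch; the table names are made up): MySQL parses FOREIGN KEY clauses for any storage engine but only InnoDB enforces them, so on an NDB table the constraint is accepted and then silently ignored:

  mysql> CREATE TABLE parent (id INT PRIMARY KEY) ENGINE=NDBCLUSTER;
  mysql> CREATE TABLE child (
      ->   id INT PRIMARY KEY,
      ->   parent_id INT,
      ->   FOREIGN KEY (parent_id) REFERENCES parent(id)
      -> ) ENGINE=NDBCLUSTER;
  mysql> INSERT INTO child VALUES (1, 999);  -- succeeds even though parent 999 doesn't exist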
But other than that, the project looks nice. Later
On Tuesday 23 May 2006 18:32, Mace Eliason wrote:
From what I have learned reading: what do people think about using Heartbeat between two boxes, rsync to sync the www directories and other files, and MySQL replication?
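Something like this for the rsync leg, maybe (just a sketch: the paths and the standby hostname are made up, and you'd want passwordless ssh keys for cron to work):

  # root's crontab on the primary: push the docroot to the standby every 5 minutes
  */5 * * * * rsync -az --delete -e ssh /var/www/ standby:/var/www/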
My only question: in the system I set up with MySQL replication, it worked great, but if you remove one of the servers and put it back in, you have to stop MySQL, copy over the newer database, and then restart both to get it replicating correctly.
Is there a way to get replication to re-sync the master and slave automatically, without having to stop, copy, and restart?
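As far as I know you can't avoid re-copying the data once a slave has diverged, but you can at least script it around a consistent snapshot instead of stopping the master. Roughly like this (hostnames, paths, the binlog file/position and the repl user are all made up):

  # on the master: block writes briefly and record the binlog position
  mysql> FLUSH TABLES WITH READ LOCK;
  mysql> SHOW MASTER STATUS;            -- note File and Position
  # ... copy the data directory (or take a mysqldump --master-data) ...
  mysql> UNLOCK TABLES;

  # on the rebuilt slave: restore the copy, then point it at that position
  mysql> CHANGE MASTER TO MASTER_HOST='master', MASTER_USER='repl',
      ->   MASTER_PASSWORD='secret', MASTER_LOG_FILE='mysql-bin.000042',
      ->   MASTER_LOG_POS=4;
  mysql> START SLAVE;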
Bowie Bailey wrote:
Fabian Arrotin wrote:
On Tue, 2006-05-23 at 12:49 -0700, Dan Trainor wrote:
For the backend storage, it depends on your budget ... :o) A minimal setup is to use NFS on a central server to host/share the same data across all your machines ... the problem in this config is that the NFS server becomes the single point of failure ... so why not use a simple Heartbeat solution with 2 NFS servers acting as one, and use DRBD between these 2 nodes for the replication ... Another method is to have a dedicated SAN with an HBA in each webserver, but that's another budget ... :o)
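Something like this, roughly (hostnames, IPs, devices and the mountpoint are made up, of course):

  # /etc/drbd.conf on both nodes: mirror one partition between the 2 NFS heads
  resource r0 {
    protocol C;                        # synchronous replication
    on nfs1 {
      device    /dev/drbd0;
      disk      /dev/sda7;
      address   192.168.0.1:7788;
      meta-disk internal;
    }
    on nfs2 {
      device    /dev/drbd0;
      disk      /dev/sda7;
      address   192.168.0.2:7788;
      meta-disk internal;
    }
  }

  # /etc/ha.d/haresources: heartbeat promotes DRBD, mounts it, grabs the VIP, starts nfs
  nfs1 drbddisk::r0 Filesystem::/dev/drbd0::/export::ext3 192.168.0.10 nfs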
Just my two cents ...
Hi, Fabian -
I've been toying around with both NFS and GFS, but NFS does leave me with a single point of failure. I'd rather not use something like DRBD, however. I'm still researching GFS to see if it's a viable alternative for what I'm looking for.
Thanks! -dant
GFS can do the job, but in this case you need real shared storage so that all the servers can access the shared data at the same time ... If you don't want to invest a lot, you can still use iSCSI, but the single point of failure still exists ...
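To give an idea of the GFS side (a sketch: the cluster name, device and journal count are made up, and the device would be your shared iSCSI LUN):

  # one journal per node that will mount it, lock_dlm for cluster-wide locking
  gfs_mkfs -p lock_dlm -t webcluster:share -j 3 /dev/sdb1
  mount -t gfs /dev/sdb1 /var/www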
It tends to be expensive to do away with all points of failure. The best you can do on a budget is try to limit your points of failure to things that tend to have a long lifespan (i.e. almost anything other than servers and individual hard drives).
For another (relatively) low-cost option, check out the AoE storage appliances from Coraid.com. Mine is still in testing, but it was very easy to configure with CentOS4 and I haven't found any problems with it so far. I currently have a 1.2TB storage area shared between three CentOS servers with GFS.
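In case anyone wonders, getting the AoE device visible on each server is about this simple (the shelf/slot in the device name depends on how the appliance is configured):

  # load the AoE driver and scan the local ethernet segment (aoetools package)
  modprobe aoe
  aoe-discover
  aoe-stat                              # LUNs show up as /dev/etherd/eX.Y
  mount -t gfs /dev/etherd/e0.0 /mnt/share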